Threads for aryeh

  1. 1

    Yes, but to model what? Put another way, what subset of “applications” is it best suited to? Is a simulation the same as a line-of-business application the same as a control system the same as a program that just transforms data?

    1. 1

      Let’s just hope that one increments the correct step result in your functional code. It is easy to ensure that in a trivial example. But the problem of needing to correctly maintain multiple versions of that returned “state” in complex code is somewhat glossed over.

      Perhaps your criticisms also reflect ambiguities in your code. 1. Internal object state is not entirely encapsulated. 2. Your (poorly encapsulated) object’s public interface includes no “reset” behavior, so each instantiation leaves no ambiguity that the count is set anew at instantiation. 3. “Threshold” should be encapsulated such that, once provided, it performs internally as intended.

      1. 21

        Oh, is it time to hype DSLs again? That makes sense, as we’re all starting to get a little embarrassed about the levels of hype for functional programming.

        I guess next we’ll be hyping up memory safe object oriented programming.

        1. 16

          I’m just sitting here with my Java books waiting for the pendulum to swing back…

          1. 9

            I’m going to go long on Eiffel books.

            1. 5

              I think a language heavily inspired by Eiffel, while fixing all of its (many, many) dumb mistakes, could go really far.

              1. 2

                I’ve just started learning Eiffel and like what I’ve seen so far. Just curious, what do you consider its mistakes?

                1. 8
                  1. CAT-calling
                  2. Bertrand Meyer’s absolute refusal to use any standard terminology for anything in Eiffel. He calls nulls “voids”, lambdas “agents”, modules “clusters”, etc.
                  3. Also his refusal to adopt any PL innovations past 1995, like all the contortions you have to do to get “void safety” (null safety) instead of just adding some dang sum types.
                2. 1

                  Racket!

            2. 14

              I, personally, very much doubt full on OOP will ever come back in the same way it did in the 90s and early 2000s. FP is overhyped by some, but “newer” languages I’ve seen incorporate ideas from FP and explicitly exclude core ideas of OOP (Go, Zig, Rust, etc.).

              1. 5

                I mean, all of those languages have a way to do dynamic dispatch (interfaces in Go, trait objects in Rust, vtables in Zig as of 0.10).
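
                For the unfamiliar, here is roughly what all three of those mechanisms amount to, sketched in Java terms (a hypothetical Shape example, not from the thread): the call site picks the implementation at runtime through an interface, with no inheritance involved.

                    // Dynamic dispatch through an interface: which area() runs
                    // is decided by the runtime type, not the declared type.
                    interface Shape {
                        double area();
                    }

                    class Circle implements Shape {
                        final double r;
                        Circle(double r) { this.r = r; }
                        public double area() { return Math.PI * r * r; }
                    }

                    class Demo {
                        static void printArea(Shape s) {
                            System.out.println(s.area()); // dispatches dynamically
                        }
                        public static void main(String[] args) {
                            printArea(new Circle(2.0)); // works for any Shape
                        }
                    }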

                1. 13

                  And? They also all support first-class functions from FP but nobody calls them FP languages. Inheritance is the biggest thing missing, and for good reason.

                  1. 12

                    This, basically. Single dynamic dispatch is one of the few things from Java-style OO worth keeping. Looking at other classic-OO concepts: inheritance is better off missing most of the time (some will disagree), classes as encapsulation are worse than structs and modules, methods don’t need to be attached to classes or defined all in one batch, everything is not an object inheriting from a root object… did I miss anything?

                    Subtyping separate from inheritance is a useful concept, but from what I’ve seen the world seldom breaks down into such neat categories to make subtyping simple enough to use – unsigned integers are the easiest example. Plus, as far as I can tell it makes most current type system math explode. So, needs more theoretical work before it wiggles back into the mainstream.

                    1. 8

                      I’ve been thinking a lot about when inheritance is actually a good idea, and I think it comes down to two conditions:

                      1. The codebase will instantiate both Parent and Child objects
                      2. Anything that accepts a Parent will have indistinguishable behavior when passed a Child object (LSP).

                      I.e., a good use of inheritance is to subclass EventReader with ProfiledEventReader, as sketched below.
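
                      A sketch in Java (the reader names come from the comment; the internals are assumed): the profiled subclass adds a side metric but returns exactly what the parent would, so condition 2 holds.

                          // Hypothetical sketch: ProfiledEventReader times each read()
                          // without changing what callers observe (LSP-friendly).
                          class Event {}

                          class EventReader {
                              Event read() {
                                  return new Event(); // stand-in for a real event source
                              }
                          }

                          class ProfiledEventReader extends EventReader {
                              private long totalNanos = 0;

                              @Override
                              Event read() {
                                  long start = System.nanoTime();
                                  Event e = super.read();                  // same result...
                                  totalNanos += System.nanoTime() - start; // ...plus a metric
                                  return e;
                              }

                              long totalNanos() { return totalNanos; }
                          }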

                      1. 10

                        Take a cookie from a jar for using both LSP and LSP in a single discussion!

                        1. 4

                          Inheritance can be very useful when it’s decoupled from method dispatch.

                          Emacs mode definitions are a great example. Nary a class nor a method in sight, but the fact that markdown-mode inherits from text-mode etc is fantastically useful!

                          On the other hand, I think it’s fair to say that this is so different from OOP’s definition of inheritance that using the same word for it is just asking for confusion. (I disagree but it’s a reasonable argument.)

                          1. 2

                            Inheritance works wonderfully in object systems with multiple dispatch, although I’m not qualified to pinpoint what it is that makes them click together.

                            1. 1

                              I’ve lately come across a case where inheritance is a Good Idea; if you’re plotting another of your fabulous blog posts on this, I’m happy to chat :)

                              1. 1

                                My impression is that inheritance is extremely useful for a peculiar kind of composition, namely open recursion. For example, you write some sort of visitor-like pattern in a virtual class, then inherit it, implement the visit method or what have you, and use this to recurse between the abstract behavior of traversing some structure and your use-case-specific code. Without open recursion you basically have to reimplement a vtable by hand, and it sucks.

                                Well, that’s my only use of inheritance in OCaml. Most of the code is just functions, sum types, records, and modules.
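
                                A rough Java rendering of that pattern (hypothetical names; the OCaml object version is analogous): the base class owns the traversal, the subclass supplies visit, and the two recurse into each other through dynamic dispatch.

                                    // Open recursion: walk() and visit() call each other
                                    // across the inheritance boundary.
                                    class Node {
                                        final String label;
                                        final java.util.List<Node> children = new java.util.ArrayList<>();
                                        Node(String label) { this.label = label; }
                                    }

                                    abstract class TreeWalker {
                                        void walk(Node n) {
                                            visit(n);                // dispatches to the subclass
                                            for (Node child : n.children) {
                                                walk(child);         // generic traversal logic
                                            }
                                        }
                                        abstract void visit(Node n); // use-case-specific hook
                                    }

                                    class Printer extends TreeWalker {
                                        @Override
                                        void visit(Node n) {
                                            System.out.println(n.label);
                                        }
                                    }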

                                1. 1

                                  Forest for the trees? When you want to create a framework that has default behaviour that can be changed, extended or overridden?

                                2. 4
                                  • obj.method syntax for calling functions — a decent idea worth keeping.
                                  • bundling behavior, mutable state, and identity into one package — not worth doing unless you are literally Erlang.
                                  1. 3

                                    IMO there is a fundamental difference between Erlang OO and Java OO to the point that bringing them up in the same conversation is rarely useful. Erlang actively discourages you from having pellets of mutable state scattered around your program: sure, threads are cheap, but that state clump is still a full-blown thread you need to care for. It needs rules on supervision, it needs an API of some kind to communicate, etc, etc. Erlang is at its best when you use threads only at concurrency boundaries, and otherwise treat it as purely functional. Java, in contrast, encourages you to make all sorts of objects with mutable state all over the place in your program. I’d wager that MOST non-trivial methods in Java contain the “new” keyword. This results in a program with “marbled” state, which is difficult to reason about, debug, or apply any kind of static analysis to.

                                  2. 2

                                    In all honesty, you sound quite apologetic toward what could arguably be considered objectively bad design.

                                    Attaching methods to types essentially boils down to scattering data (state) all over the code and writing non-pure functions. I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults or groupthink.

                                    Almost the same could be said about inheritance. Why would fitting a data model into a single universal tree be a good idea? Supposedly to implicitly import functionality from parent classes without repeating yourself. Quite a silly way to save a line of code, especially considering the languages that do it are rather verbose.

                                    1. 4

                                      I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults or groupthink.

                                      Here’s a pro tip that has served me well over many years. Whenever I see millions of otherwise reasonable people doing a thing that is obviously a terribly stupid idea, it is always a lack of understanding on my part about what’s going on. Either I am blind to all of the pros of what they are doing and only see the cons, or what they’re doing is bad at one level but good at a different level in a way that outbalances it, or they are operating under constraints that I don’t see or pretend can be ignored, or something else along those lines.

                                      Billions of lines of successful shipped software have been written in object-oriented languages. Literally trillions of dollars of economic value have been generated by this software. Millions of software developers have spent decades of their careers doing this. The thought that they are all under some sort of collective masochistic delusion simply does not pass Hanlon’s Razor.

                                      1. 1

                                        To be honest, the more I study OOP (or rather, the hodgepodge of features and mechanisms that are claimed by various groups to be OOP), the less room I see for a genuine advantage.

                                        Except one: instantiation.

                                        Say you have a piece of state, composed of a number of things (say a couple integers, a boolean and a string), that represent some coherent whole (say the state of a lexer). The one weird trick is that instead of letting those be global variables, you put them in a struct. And now you can have several lexers running at the same time, isn’t that amazing?
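
                                        A toy sketch of that trick (a hypothetical lexer, written in Java rather than C for consistency with the rest of the thread): the state lives in instance fields instead of globals, so several lexers can coexist.

                                            // All lexer state is per-instance, not global.
                                            class Lexer {
                                                private final String input; // the string being lexed
                                                private int pos = 0;        // a couple of integers...
                                                private int line = 1;
                                                private boolean atLineStart = true; // ...and a boolean

                                                Lexer(String input) { this.input = input; }

                                                char next() {
                                                    char c = input.charAt(pos++);
                                                    atLineStart = (c == '\n');
                                                    if (atLineStart) line++;
                                                    return c;
                                                }

                                                int line() { return line; }
                                            }
                                            // new Lexer("foo") and new Lexer("bar") run side by side.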

                                        Don’t laugh: before OOP was popular, very prominent people thought it was a good idea to have global state in Lex, Yacc, or error handling (errno). So here’s my current guess: the success we attribute to OOP doesn’t really come from any of its overly hyped features. It comes from a couple of very mundane, yet very good programming practices it adopted along the way. People attributed to the hyped stuff (such as inheritance) a success that was earned mostly by avoiding global variables.

                                        Abstract data types are amazing, and used everywhere for decades, including good old C. The rest of OOP though? Contextual at best.

                                      2. 3

                                        Many decisions are only clearly good or bad in retrospect.

                                    2. 6

                                      Inheritance is the biggest thing missing, and for good reason.

                                      That reason being “inheritance was the very first mechanism for subtyping, ADTs, and code-reuse, and people using it got ideas for better mechanisms from it.” ;)

                                      1. 1

                                        Exactly!

                                      2. 3

                                        The first versions of Simula and Smalltalk didn’t have inheritance either. Self and other prototypal object-oriented languages don’t use traditional inheritance either. We still call all of them object-oriented.

                                        Honestly, it’s well beyond time that we retire all programming language paradigm terms. Modern languages simply aren’t organized into paradigms the way older, simpler languages were.

                                        It’s like we’re looking at a Honda Accord and arguing over whether it’s a penny farthing or a carriage. The taxonomy no longer makes sense.

                                    3. 1

                                      Ah yes and that’s why it’s ripe to have a come back. :)

                                      Seriously though I expect that the next incarnation will be “oop without inheritance” or something. Probably combined with some large corporation “inventing” gc-less memory management.

                                      1. 2

                                        The good parts of OOP never really left. We already have that exact language: Rust. It has formal interfaces (Traits), encapsulation, polymorphism, and gc-less memory management.

                                        1. 10

                                          The main thing about OOP that needs to die is the idea that OOP is a coherent concept worth discussing on its own. Talk about the individual concepts as independent things! It’s much more productive.

                                          1. 1

                                            Talk about the individual concepts as independent things!

                                            IMO OOP these days really means inheritance and an object lifecycle. All the other concepts aren’t really unique to OOP.

                                            1. 3

                                              I think “OOP” generally means “features of object-oriented languages that I don’t like” to a lot of people. The people using those languages don’t generally get into paradigm arguments.

                                              (Personally, I consider inheritance to be common in OOP languages but not a particularly interesting or salient part of them. Many early OOP languages didn’t have inheritance and prototypal ones have an entirely different code reuse model.)

                                              1. 1

                                                For some people “OOP” means “features of languages I do like”. For instance I’ve seen people include templates/generics/parametric polymorphism and unnamed functions as core parts of OOP… having learned CamlLight (OCaml without the “O”) in college, I confess I was quite astonished.

                                              2. 2

                                                You say that but it means different things to different people. I don’t disagree that your definition would be a good one if you could get people to agree on it, but I can’t assume that when other people say “OOP” that’s what they’re talking about.

                                        2. 1

                                          I think it will come back, rediscovered as something new by a new generation disillusioned with whatever has been the cool solves-everything paradigm of the previous half decade. Perhaps this time as originally envisaged with a “Scandinavian school” modeling approach.

                                          Of course it never left as the first choice for one genre of software… the creation of frameworks featuring default behavior that can be overridden, extended or changed.

                                          Those languages you mention (Go, Zig, Rust) are primarily languages solving problems in the computer and data sciences, computing infrastructure and technical capability spaces. Something is going to be needed to replace or update all those complex aging ignored line-of-business systems.

                                        3. 11

                                          There isn’t really any need to “hype” DSLs because they’re already widely used in all domains of programming:

                                          • front end: HTML / CSS / JavaScript, and most JS web frameworks introduce a new DSL (multiple JSX-like languages, Svelte, etc.)
                                          • back end: a bajillion SQL variants, a bazillion query languages like Redis
                                          • builds: generating Ninja, generating Make (CMake, Meson, etc.)
                                            • there are at least 10 CI platforms with their own YAML DSLs, with vars, interpolation, control flow, etc.
                                          • In games: little scripting languages for every popular game
                                          • Graphics: scene description languages, shader languages
                                          • Compilers: LLVM has its own TableGen language, languages for describing compiler optimizations and architecture (in the implementation of Go, a famously “not DSL” language), languages for describing VMs (Ruby)
                                          • Machine Learning: PyTorch, TensorFlow, etc. (these are their own languages, on top of Python)
                                          • Distributed computing: at least 10 MapReduce-derived frameworks/languages; there are internal DSLs in Scala for example, as well as external ones
                                          • Mathematics and CS: Coq, Lem, etc.

                                          All of these categories can be fractally expanded, e.g. I didn’t mention the dozens of languages here: https://github.com/oilshell/oil/wiki/Survey-of-Config-Languages – many of which are commonly used and featured on this site

                                          If you think you don’t use DSLs, then you’re probably just working on a small part of a system, and ignoring the parts you’re not working on.

                                          ALL real systems use tons of DSLs. I think the real issue is to mitigate the downsides.

                                          1. 1

                                            Oh yes but at the same time if you haven’t seen the hype for DSLs then you haven’t spent long enough in the industry to go through that part of the hype cycle. DSLs are what they are and it looks like we might be entering a hype cycle where people want to make them out to be much more.

                                            1. 3

                                              I don’t agree; I’ve been in the industry for 20+ years, and there are plenty of things more hyped than DSLs (cloud, machine learning, etc.).

                                              DSLs are accepted standard practice, and widely used, but often poorly understood.

                                              I’m not getting much light from your comments on the subject – you’ve made two claims of hype with no examples.

                                              1. 2

                                                Here’s an example of recent hype https://www.codemag.com/Article/0607051/Introducing-Domain-Specific-Languages

                                                Here’s some hype from the year 2000 https://www.researchgate.net/publication/276951339_Domain-Specific_Languages

                                                Arguably the hype for 4GLs was the prior iteration of that specific hype.

                                                I’m not arguing that DSLs are bad - I’m saying that they’re one of the things on the roster of perfectly good things that periodically get trumpeted as the next big thing that will revolutionize computing. These hype cycles are characterized by attempts to make lots of DSLs when there isn’t a strong need for it or any real payoff to making a language rather than a library.

                                          2. 4

                                            I know it might sound a bit controversial, but the way I see it we need to reach a new level of abstraction in order for large-scale software development to be sustainable. Some people might say AI is the way forward, or some other new programming technique. Either way I don’t think we’ll get there by incrementally improving on the paradigms we have—in order to reach the next level we’ll have to drop some baggage on the way up.

                                            1. 4

                                              I mean, humans aren’t getting better at grokking abstraction, so I don’t know that “new levels of abstraction” are the way forward. Personally, I suspect it means more rigor about the software development process: if you’re building a tall tower, maybe the base shouldn’t be built with a “move fast and break things” mentality.

                                              1. 3

                                                Grokking abstractions isn’t the problem; at the end of the day, abstractions are just decisions made on behalf of their users. Over-abstraction is the root of many maintainability woes, IMO. The more a programmer knows about what’s actually going on underneath the better, but only to the degree that it’s relevant.

                                              2. 3

                                                I’ve heard it before. DSLs have their place, and some people love them while others hate them. This is one of a rotating cast of concepts that you’ll eventually see rehyped in 10 years.

                                            1. 2

                                              Isn’t encapsulation a feature of … any “function”? It isn’t typically a language feature of the “data” in FP that a function might transform. The article seems to be scratching one’s left ear with one’s right hand to address this conundrum?

                                              1. 1

                                                Why? It was associated “en masse” primarily with “class diagrams” and to a much lesser degree “sequence diagrams”, both largely associated with OO. Object orientation has been (without real justification) going out of fashion for some time. It is increasingly becoming a meme today that it is simply bad, obsolete, a mistake or a corporate conspiracy. One used by the ignorant who knows not the wonders of functional or procedural programming.

                                                1. 4

                                                  Sequence diagrams are great. I’ve used them in a number of ways outside OO, as well.

                                                  1. 2

                                                    Sequence diagrams might be the only formal diagram I still use. They’re especially helpful for explaining/documenting how several distinct systems interact.

                                                    1. 2

                                                      They can form the basis of a nice “fractal” / “zooming” exposition of how systems work, too. Sequence diagrams for the high-level system components, all the way down to API calls.

                                                      Years ago (2006?) I prototyped a thing that hooked .NET CLR debugging APIs to generate Visio sequence diagrams from running code. Even back then they were one of the few formal diagrams I used.

                                                  2. 2

                                                    Object orientation has been (without real justification) going out of fashion for some time. It is increasingly becoming a meme today that it is simply bad, obsolete, a mistake or a corporate conspiracy. One used by the ignorant who knows not the wonders of functional or procedural programming.

                                                    Citation needed.

                                                    I don’t doubt that people on programming forums often say things like that. I do doubt very much that their statements are actually representative of programming as a field.

                                                    1. 5

                                                      The statement is a bit too broad. A lot of OOP is conflated with inheritance.

                                                      In Java, unnecessary inheritance has been demonized as lasagna code, e.g., enterprise fizzbuzz. Java developers have been generally moving toward favoring composition over inheritance.

                                                      The Go programming language doesn’t support inheritance, and has poor (imo) support for polymorphism. The fact that Go has become fairly popular is probably the strongest evidence that OOP is in decline. Likewise, Rust has been the most loved programming language for several years in a row. Rust also has negligible support for inheritance (albeit great polymorphism).

                                                      As far as I can tell, the only OOP pillar that remains unassailed is encapsulation. Popular languages that lack strong support for encapsulation (e.g. Python) are generally criticized for this point.

                                                      1. 3

                                                        JavaScript has been eating the world lately, and I don’t think anyone would deny that it’s an object-oriented language, despite its lack of traditional inheritance. Go is popular, but I think it’s debatable whether Go’s popularity is attributable to rejection of object-oriented programming.

                                                        And most of your example links are, again, from the relatively insular world of things written on/by/for programming forums, which I continue to claim are not representative.

                                                  1. 2

                                                    I think this is a little too limited to the minutiae and plumbing of programming, and should instead consider some progress beyond our current (old) design methods of functional decomposition, structured analysis, and data modeling (OO modeling seems to no longer exist).

                                                    Outside of the domains of computing and the data sciences live the systems that make the world work. The systems that manage money, commerce, taxes, insurances, transport, health, education, scheduling and environment. Isn’t a way to model these systems into code with some predictability also a needed programming breakthrough?

                                                    1. 3

                                                      Sadly I think not really, if being able to produce outcomes predictably is an important industry aim.

                                                      As evidence for that, we are no more capable of producing complex software successfully outside the domains of computing and the data sciences than when I started in computing over 40 years ago. When I say successfully, I mean on time, on budget, and to specification. These are what stakeholders really understand, not the “acceptable on release” standard snuck into the small print of modern large scale undertakings. The failure rate for large, complex commercial, government, and public sector projects remains as high as when I started.

                                                      What we instead have learned is a cycle of: adopt enthusiastically as a silver bullet; discover limitations; discard; forget; rediscover as new under a new name - with a particular focus on languages, platforms, technologies and the minutiae of computing. These, after all, are the things programmers can control… not success.

                                                      1. 3

                                                        When I say successfully, I mean on time, on budget, and to specification.

                                                        That last point is the key one, I think. Even if software developers have learned a lot, it won’t ultimately move the needle all that much in the face of vague, internally-inconsistent specifications. Sometimes project failures are technical failures, there’s no denying it, but in my experience, the majority of failed projects are victims of institutional inertia and indecision and inability to articulate requirements with the necessary precision and thoroughness. Many projects are doomed at the stakeholders’ level, and programmers are pretty much powerless to do anything about that.

                                                      1. 2

                                                        Thought this was a good presentation. However, I was waiting for the presenter to say that, like JavaScript, the Dart runtime is single-threaded and event-loop based. One had better like async-only programming. This limits its applicability to certain types of “systems” programming. Its runtime performance generally falls between Ruby/Python and Go, but closer to the former than the latter.

                                                        1. 4

                                                          How refreshing to read about aspects of modeling a domain, rather than just technologies, frameworks, computing infrastructure, and more technologies.

                                                          One of the realities of software development is that domain experts often do not know what they want. Iterative development based on feedback is intended (in an ideal world) to address that.

                                                          1. 2

                                                            At the boundaries, applications are themselves objects, and the JSON messages you use to communicate with them are immutable. They, if we’re also talking (micro)services, are the closest thing to what Alan Kay envisioned as “objects”. (And similarly what exists in Erlang, if I’m not mistaken.)

                                                            Also, as a point:

                                                            While an object is data with behaviour, closures are behaviour with data. Such first-class values cross system boundaries with the same ease as objects do: Not very well.

                                                            Yes, well, state being bound up in a function would be a problem, but otherwise a pure stateless function is going to cross just fine. Assuming it executes in a limited context that has its dependencies. (Alternatively, you could transform said function into a single function of all its dependencies inlined.)

                                                            And, a function can include local data, as static information within itself. Of course, none of this is literally an object/closure with running state, but, if we wanted to get pedantic, we could totally send an image of a running process or module and spin it up on the other end, like communication via passing Lisp images, but that’s really stretching the imagination… or maybe an amusing thought exercise… At this point we’re reaching similar modeling of the biological world, and maybe Alan Kay, being a biologist, would find this amusing.
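
                                                            As a concrete aside, Java can actually serialize a closure, but only to a remote that already has the defining class on its classpath, which is one way to see why “not very well” is fair. A sketch (hypothetical names, using standard java.io serialization):

                                                                import java.io.*;

                                                                public class ClosureWire {
                                                                    // A functional interface that is also Serializable.
                                                                    interface SerTask extends Runnable, Serializable {}

                                                                    public static void main(String[] args) throws Exception {
                                                                        String greeting = "hi"; // captured data (serializable)
                                                                        SerTask task = () -> System.out.println(greeting);

                                                                        // Serializing the closure's captured state works fine.
                                                                        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                                                                        new ObjectOutputStream(bytes).writeObject(task);

                                                                        // Deserializing only works where ClosureWire itself
                                                                        // exists: the code does not travel with the bytes.
                                                                        ObjectInputStream in = new ObjectInputStream(
                                                                                new ByteArrayInputStream(bytes.toByteArray()));
                                                                        ((Runnable) in.readObject()).run(); // prints "hi"
                                                                    }
                                                                }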

                                                            1. 1

                                                              Assuming it executes in a limited context that has its dependencies.

                                                              If we’re going to assume the remote has the right dependencies, why not just assume that the remote has the stateless function as well and just send the data that parameterizes the function? That’s basically how RPC works today (and has worked for a long time). It doesn’t seem very useful to be able to marshal functions but only to remotes which have the dependencies pre-installed. It seems like we shouldn’t assume dependencies are installed and either do our normal RPC stuff or we come up with a scheme for marshaling the dependencies over the wire as well (perhaps with some caching so we’re not marshaling them to remotes that already have them).

                                                              Moreover, regardless of whether we’re sending dependencies or not, sending stateless functions isn’t particularly easy in a natively compiled language, because you’re assuming the remote has the same architecture and a compatible kernel (or libc in many cases). So you either need to enforce that invariant, or you need to keep a platform-agnostic version of the function (e.g., source code or VM bytecode) and ship that, which means some kind of compiler/runtime on the remote.

                                                              All of this seems to support the “not very well” characterization of marshaling closures and objects.

                                                              1. 1

                                                                Of course the inventors of object-orientation would likely take issue with your deduction that microservices, immutable messages and Erlang best constitute their concepts (it wasn’t Alan Kay). This is taking the magical “it’s all about messaging” to an absurd extreme.

                                                                are the closest thing to what Alan Kay envisioned as “objects”

                                                                This implication that Alan Kay somehow invented objects is incorrect, yet disturbingly becoming “the new truth”. I think such historical revisionism should not go uncommented.

                                                                1. 1

                                                                  Would you care to set the record straight? The version of the story I know involves Scheme and actors, but also contains immutable messages, Erlang, and microservices (in that order). It’s not just about messages, but about the realization that stateless code can be packed into messages. Note that Wikipedia lists Kay in their history of object-oriented programming.

                                                                  1. 1

                                                                    Rather than repeat the historical record, perhaps a place to start the historical journey is the following quote by Alan Kay…

                                                                    “I don’t think I invented “Object-oriented” but more or less “noticed” what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote “The Early History of Smalltalk”.

                                                                    1. This raises the question of who did invent it?
                                                                    2. What exactly did Smalltalk add in terms of object orientation that did not already exist, other than “objects all the way down”?
                                                                    3. Should object orientation therefore be viewed through Alan Kay’s “it’s about messaging” perspective, particularly when his context was effectively limited to the language, platform and environment that is Smalltalk?
                                                                    1. 1

                                                                      Wow, so let’s see if I interpreted this chain of replies correctly:

                                                                      • “Wow, you’re wrong. You’re so wrong, it’s disturbing how wrong you are.”
                                                                      • “Okay, well where are we all wrong?”
                                                                      • “Oh, such impressive knowledge took me great effort to learn by trawling historic tomes by my lonesome, and you yourself should put in the same effort I did if you wish to become as enlightened as I.”

                                                                      If so, I’m just going to pass on this conversation, and note that at no point did I even claim Alan Kay invented OO, so I don’t understand what prompted this pedantic anti-Kay/messaging reply.

                                                              1. 1

                                                                Perhaps a point of contention is that there seems to be an assumption that “applications” primarily exist to transform data and transfer it between them. That is surely a narrow viewpoint? Given this, I think I understand the (current) attraction to the functional paradigm, as it particularly suits this assumption?

                                                                1. 2

                                                                  We want to acknowledge the outage…

                                                                  An actual apology would have been more appropriate.

                                                                  1. 8

                                                                    I’m an Atlassian employee.

                                                                    One of the co-founders, Scott Farquhar, did send out an apology to affected customers but it was tricky to even get a list of people to contact, since the contact information was deleted as part of the script. It was definitely incomplete initially.

                                                                    I probably can’t say much, but there was confusion during the incident over who had been contacted, who should be contacted, how to contact people, etc. due to the initially missing data. Some huge changes will be implemented around contacting customers in the future - we have to do better.

                                                                  1. 1

                                                                      It is common to lump all application code into one type. That leads to the mistaken logic that a particular concept that is bad for one is bad for all.

                                                                    While true that singletons are (typically) bad when object modeling a problem domain or Abstract Data Type, they can be perfectly applicable to application infrastructure around that code, or within a development/runtime framework.

                                                                    1. 2

                                                                        The real reason is that it doesn’t properly support the decomposition of a complex problem into individual parts that can be independently written, maintained, reused, and then recombined into an application. The callable “procedure” or “subroutine” does.

                                                                      1. 4

                                                                          My view is that attempting to define abstraction in strictly computing terms is problematic. Part of that problem is that mechanisms of programming like the module, class, object, inheritance, composition, ADT, procedure and function are “means” to support the act of abstraction. To derive a definition of abstraction from them is nonsensical.

                                                                        Consider instead that to abstract is to create concepts to capture the complexity of the world around us. It is a core human process.

                                                                        A concept is a generalized idea of a collection of phenomena, based on knowledge of common properties of instances in the collection. A phenomenon is a thing that has definite, individual existence in reality or in the mind. Concepts themselves are formally described in terms of their intension, extension and designation. Furthermore there are two typical viewpoints: the Aristotelian and the Prototypical.

                                                                        1. 1

                                                                          I think a more accurate title would be “Some benefits of simple software technical architectures”. I’m old enough to remember when there were choices as to how one “architected” functional requirements and domain constraints, not just non-functional requirements.

                                                                            A simple “crud” app that adds and subtracts numbers? 70 engineers for that? Sorry, I call that ball out of bounds! I suggest that there actually is complex “behavior” beyond “create, read, update and delete”, as well as very complex constraints on that behavior. Personally I’d love to know how those are modeled and implemented.

                                                                            That behavioural complexity can certainly be implemented on top of a simple technical architecture rather than a complex one, but I do think that “technical” is a required word.

                                                                          1. 1

                                                                            For most of my career there was no such thing as “code reviews”. Then, their introduction in large public codebases was warranted, given code submission from unknown people and sources. That seems to have now been extended to effectively all development, particularly for a new generation of developers that know no different.

                                                                              Internal professional developers (both employees and contractors) used to be considered competent to write and introduce code for both features and bug fixes. These days that just doesn’t seem to be the case? Are developers less competent, less trustworthy or something else these days?

                                                                            1. 5

                                                                              Code reviews are not just about finding bugs. Or even mostly about that! Their primary purpose is education: to make sure more than 1 person understands how a given bit of code works, and is capable of maintaining it.

                                                                              edit: or, what edudobay said ;)

                                                                              1. 3

                                                                                  Though there is criticism of code review in professional software development (was it blindly taken from open source, without considering that professional teams interact differently?), and in some environments it might also be seen as mitigation for a lack of trust or competence, code review (in any form it might take) is a valid way of communicating and sharing knowledge in a team.

                                                                                1. 1

                                                                                    Code that I put out for review is better than code I check in without review, even before the code review happens.

                                                                                1. 1

                                                                                      I thought to try Librewolf, but sadly, on modern Macs, the OS prevents it from being run after installation for security reasons (it’s unsigned).

                                                                                  1. 4

                                                                                    Praying for Ukraine.

                                                                                    1. 2

                                                                                          I’m amused and sad that in my long career we have moved from the problem, 40 years ago, of how to remove the unmaintainable mess that stored procedures inevitably become, to today promoting that they be applied again.

                                                                                      Forget, rediscover, rename, silver bullet adoption, relearn limitations, discard, and repeat. The “science” of our industry.

                                                                                      1. 3

                                                                                        I’ve been keeping everything in PostgreSQL functions for 7 years now, for all of my web apps, and I’ve never found any problem with it.

                                                                                        Ever since I first started posting about it in 2015 - https://sive.rs/pg - I’ve heard some commenters say this same thing you’re saying, but never any concrete explanation of why this is such a problem now, in 2022, in PostgreSQL.

                                                                                        I think stored procedures must have really traumatized some people 40 years ago, but maybe the problems with it then are not still problems now.