It’s great to see Sidekiq still going strong after all these years! The iteration feature is really interesting, and I can think of many situations where it would have come in very handy but I had to reach for a more complicated solution. Don’t think I’ve seen that in similar libraries.
Every big Rails team I have worked on ends up building their own janky version of it (I’ve built a couple), very nice to have Sidekiq provide an implementation.
This looks great. I really like the fact that SQL/PGQ is just a thin layer (essentially syntactic sugar, like views) on top of regular relational tables. I’ve been eyeing AGE for years, but didn’t like that graphs are completely separate objects from tables. I guess that approach is or could be more performant, and it might be nice to have full-blown graph capabilities inside of Postgres instead of deploying a separate graph database system. But for my use cases, I’m not looking to host massive graph data, but just want to be able to treat relational data as a graph without multiple CTEs - which is exactly what SQL/PGQ does. Looking forward to this getting stabilized and released!
Ruby’s use of select comes from Smalltalk, which has a somewhat cute set of standard collection iterator methods.
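For anyone who hasn’t used them, the Smalltalk-derived names map onto Ruby’s Enumerable like this (a quick illustrative snippet of mine, not from any particular codebase):

```ruby
numbers = [1, 2, 3, 4, 5, 6]

evens   = numbers.select  { |n| n.even? }  # keep elements matching the block
odds    = numbers.reject  { |n| n.even? }  # drop elements matching the block
first   = numbers.detect  { |n| n > 3 }    # first element matching the block
squares = numbers.collect { |n| n * n }    # transform each element (aka map)

p evens    # => [2, 4, 6]
p odds     # => [1, 3, 5]
p first    # => 4
p squares  # => [1, 4, 9, 16, 25, 36]
```

`collect` and `detect` have the more familiar aliases `map` and `find`, but the Smalltalk names still work everywhere.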
I wasn’t familiar with LINQ, but it looks a lot like a rationalized SQL, putting the select clause at the end where it makes better sense. The omission of SQL from the philology is a bit surprising, although I guess it’s not considered a “programming language”.
There’s more cuteness behind those names: they came from Arlo Guthrie’s hit Alice’s Restaurant:
Came to talk about the draft: They got a building down New York City, it’s called Whitehall Street, where you walk in, you get injected, inspected, detected, infected, neglected and selected.
Alice May Brock, whom the song was named after, passed away in November 2024.
Yeah, LINQ is really interesting. It lets you write queries in a fairly clean generic syntax that converts to an API that different back ends can use, including of course ones that transpile to SQL.
I would like a program that uses openstreetmap data, such as building heights and forest areas, as well as calculated position of the sun, to find good circular paths for taking a walk that lets you see the sun a bit. I live far north among tall trees and buildings and sometimes it’s quite a challenge to see the sun before it sets.
That’s a great idea, actually! I’d like such a tool, too! I’ve recently heard about OpenPV, which is a German open-source web-based tool to calculate suitable positions for PV panels and expected power generation. They rely on much the same data: building heights and vegetation, but OSM data wasn’t good enough for them. Still, maybe one could reuse some of their code.
Scaling out and distributing the workload is easier with application code. It’s just a matter of throwing more containers or servers to keep up with demand.
Of course, this depends on your infrastructure, but setting up a couple of read replicas is not especially hard.
Nowadays, it’s common to have databases managed by the cloud service provider that run on ridiculously anemic VMs.
Well… don’t?
I do appreciate the “Other Views” section! And I agree with those “other views”.
Also I’d add: the chances of the SQL execution engine having bugs are probably a lot lower than the chances of your application code having them. So as long as you get your SQL query right, you’re much less likely to have bugs than if you write the logic yourself.
Once you have something like PgTAP so you can test your SQL queries and DB, you are in amazing shape for generic well-tested code that is very unlikely to break over time.
I agree that unless you know SQL and your DB, it can be hard to reason about how it handles performance. In your own code, that would be easier, since you wrote it and (hopefully) understand its performance characteristics.
I would argue bug-free predictable code is more important than performant code. In my experience, when performance starts to matter is also when you can generally start to dedicate resources specific to performance problems. I.e. you can hire a DB expert in your particular DB and application domain.
I’m not with you on the ‘hard to test locally, since large data is required’ part though. You usually get access to something like Postgres’s EXPLAIN ANALYZE, where you can verify the right indexes are being used for a given query. Sure, you might need some large test instance to figure out what the right indexes are, but that’s a development task, not a testing task, right? Once you figure it out, you can write a test that ensures the right indexes are being used for a given query, which is the important part to test, from my perspective.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, e.g. if you have ever seen a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
That’s not completely true: one beautiful thing about JSX is that any JSX HTML node is a value, so you can use all the language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but being able to put HTML values in lists and dictionaries and wield the full power of the language does come in handy sometimes.
Well, that’s exactly what the OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by React or even by frontend libraries.
In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Most backend server frameworks use templating instead.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread “many” as “most” as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate our whole Ruby app to Scala so we can get ScalaTags. JSX was the first values-based HTML builder to get mainstream exposure; you and your sibling comment point to Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX on both the front and back end, which makes it useful for generating HTML. Your post, your sibling, and the OP just sound slightly butthurt at JavaScript for some reason. It’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I find that arguing in bad faith rather than trying to bring anything constructive to the conversation, same as the rest of the thread.
Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use them in yours.
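To make the tags-as-values idea concrete in Ruby, here’s a tiny hypothetical builder. It’s only a sketch of the style, not Markaby’s actual API:

```ruby
# A toy tags-as-values builder: every node is a plain Ruby value.
Node = Struct.new(:name, :children) do
  def to_html
    inner = children.map { |c| c.is_a?(Node) ? c.to_html : c.to_s }.join
    "<#{name}>#{inner}</#{name}>"
  end
end

def tag(name, *children)
  Node.new(name, children)
end

# Because nodes are values, ordinary list operations build the markup:
items = %w[apples bread milk].map { |item| tag(:li, item) }
page  = tag(:ul, *items)

puts page.to_html
# => <ul><li>apples</li><li>bread</li><li>milk</li></ul>
```

Markaby and Scalatags do the same thing with much more polish (attributes, escaping, and so on), but the core trick is just this: HTML as data in the host language.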
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that, after all, I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first, which is definitely some overhead; we just tend to forget that once we’ve learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
(I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(So personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of “render” is to extract, convert, deliver, submit, etc. This use is perfectly in line with the definition and with centuries of usage IRL, so I can’t complain too much really.)
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.JS will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
The post is to educate, gets me no money, has no ads, is not related to my employer and not endorsing any products. I did check the box that I am the author. If you want to keep me as a member, please be more welcoming to me. Thank you.
Self-promotion: It’s great to have authors participate in the community, but not to exploit it as a write-only tool for product announcements or driving traffic to their work. As a rule of thumb, self-promo should be less than a quarter of one’s stories and comments.
This rule applies to everyone, even if you’re not making money off the posts.
The rule is enforced so inconsistently. Posts with more contention get the rulebook thrown at them, while people like https://lobste.rs/~stapelberg/ or even myself get away with it.
As someone who feels discouraged from submitting my own work because of this rule, I don’t find this argument to be compelling. What I get from this is, “as long as you contribute to the community in other ways, you can totally break these rules anytime you want, within reason, I guess.” That seems a bit silly to me.
It’s not “you can break these rules anytime you want, within reason” so much as “the rule is that your own stuff shouldn’t be the vast majority of what you post”. Any reasonable amount of other interaction means you are following that custom. It’s not some big thing, as much as occasional folks kvetch about it.
That custom exists for a reason: people see Lobsters as a marketing or advertising channel and treat it that way. It has been abused in the past. It appears to be being abused in the present. It will doubtless be abused in the future. That custom and vigorous flagging–when people notice!–are one of the few things we have going for us.
You’re new (5 months according to your profile at time of writing), so maybe you haven’t seen this before and it all seems rather arbitrary–so, I’ll try and explain:
Lobsters is a slow-moving site (compared to, say, the orange site or a decently popular subreddit). Posts here stay on the frontpage for a day or two easily. In advertising terms, a slot here stays up for a while and gets a lot of impressions by a pretty well-defined (and valuable!) audience. I have heard valuations in the low 5 figures. This makes us a target for growth hackers (and the modlog is full of efforts to kick them out when they are discovered).
We could ban self-promotion altogether, but it turns out that sometimes people do write interesting things themselves and would like to share them. We could always allow self-promotion, but that is basically incompatible with trying to prevent growth-hacking and marketing.
So, instead, we make the compromise of “If you post here and mostly bring in other interesting things, and occasionally your own stuff, that’s probably okay”. This means that anybody who actually is in it just to shill their own work must also do what amounts to community service, and that tends to give an easy way of spotting growth hackers and free riders.
I think it’s weird that you point out that I comment, but haven’t submitted anything, as if that has any impact on my argument here. Likewise, having a newer account doesn’t bar me from having an opinion or disagreeing with someone else. I’m clearly not some freeloading self-promoter. I’m a real person who just happens to disagree with that comment.
I’m not disagreeing with the custom. The notion that it’s not always applied fairly does resonate with me, though.
The rule is not “only 25% of your submissions can be your own”, it’s “25% of stories and comments”, so what you call “other ways” is explicitly part of it. But yes, I’m arguing that there is a fundamental difference between “doesn’t participate at all outside own submissions” (which is explicitly called out as what the rule wants to prevent) and “does participate regularly, but maybe doesn’t meet the rule-of-thumb threshold”.
Agree. I think these warnings are something that could and should be automated instead of people haphazardly being chastised. Most guidelines aren’t easy to automate, but this one would be.
(And I definitely think it’s a good rule!)
If an exception is made for some people who only come here to post releases/links that a lot of people are interested in (lobste.rs/~aphyr comes to mind), then that can be explicitly flagged instead of them being unofficial royalty.
“Friendlysock, this isn’t terribly friendly at all!”–okay, sure, but you’re engaged in behavior that is indistinguishable from a long list of bad-faith and near-bad-faith actors. I’m sure lots of them have similar sob stories if given the opportunity.
In order to avoid this, for what it’s worth, all you have to do is comment a few times in other stories, submit interesting learnings that you didn’t write, and not be a shithead. It isn’t a particularly high bar, and if you find it difficult to clear there are many other communities that you’d doubtless find more welcoming.
The idea is to discourage folks who are here only to submit their own work, and only to participate in comments on their own stories — it’s a community, not a publication channel. This behaviour is unwelcome.
Regarding relativity: it says we can ignore it, with a footnote:
IF YOU ARE PROGRAMMING SPACECRAFT, or maybe anything that communicates with spacecraft, please set this guide down carefully.
However, I’d also point out that relativity is appropriate when dealing with distributed systems; e.g. no universal clock, observers disagreeing on the order of events, etc. I’m not saying we should take Lorentz transforms of timestamps, or whatever; but that it gives the right intuition (there is no preferred observer/reference frame, communication latency is the defining feature of spatial separation, causality is a fundamental relation agreed on by all observers, etc.), rather than trying to patch up a Newtonian intuition (e.g. trying to subtract latency from timestamps to get the “real” time, or comparing sequences of events with a re-ordering tolerance of 1 second, etc.)
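That intuition is exactly what Lamport’s logical clocks formalize: timestamps that preserve causal order without any universal clock. A minimal sketch (my own toy Ruby, not production code):

```ruby
# Lamport logical clock: a counter that only ever respects causality.
class LamportClock
  attr_reader :time

  def initialize
    @time = 0
  end

  # Local event: advance the clock.
  def tick
    @time += 1
  end

  # Send: tick, then stamp the outgoing message with the new time.
  def send_message
    tick
  end

  # Receive: jump past the sender's stamp to preserve causal order.
  def receive_message(sender_time)
    @time = [@time, sender_time].max + 1
  end
end

a = LamportClock.new
b = LamportClock.new

a.tick                    # a: 1 (local event)
stamp = a.send_message    # a: 2, message carries timestamp 2
b.receive_message(stamp)  # b: max(0, 2) + 1 = 3

puts a.time  # => 2
puts b.time  # => 3
```

Note what it does *not* give you: two causally unrelated events can still carry any relative timestamps, which is the “observers disagree on ordering” point in miniature.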
That is such a classic paper. I did my MSc looking at event tracing in distributed systems (in the days way before OpenTelemetry and all of the APM companies) and that paper kind of blew up my notion of what kinds of problems I needed to solve. In undergrad I’d taken a Modern Physics course so I was already mentally primed for ideas like the future light cone and being unable to define a total ordering of events when there’s spatial separation but that paper just drove it home as highly relevant to the work I was doing.
This is one of the things I love about Ruby. Most of the syntax is just sugar for a method call (or, technically, sending a message), and with the ability to monkeypatch everything you can rewrite the language to suit your fancy within Ruby itself!
What’s cool about it is that it’s not like a Lisp or Rust or C or other languages that let you define macros, which require a separate compiler pass to expand (although in Lisps the line between macro and language construct is very blurred by virtue of the syntax), nor like a Forth, where you have to define parsers for new words. Ruby has a well-defined syntax and no macro system, so it’s actually more similar to defining __enter__ and __exit__ in Python to make a class compatible with the with statement. It’s just that Ruby has syntax sugar for method calls where other languages would have custom language constructs, so you can often fall back to the “core language” to redefine what a particular bit of syntax desugars to.
Languages often have these bits of syntax or other features that are linked to a restricted set of types and that you can’t instrument other types to use (like ? in Rust, which currently only works on Result and Option instead of defining a Try trait that you can implement on other types, or Elm’s special type classes like “number” that are baked into the compiler, which probably take after SML’s arithmetic functions that work on both int and real, etc.). I like to call these things “compiler magic”, and I think it’s nice that Ruby has very little compiler magic.
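A quick Ruby illustration of that lack of compiler magic: operators and indexing are just methods you can define on your own types (a toy example of mine):

```ruby
# `+`, `[]`, and `to_s` are ordinary methods, so the syntax sugar
# `a + b`, `v[0]`, and `puts v` all desugar to method calls we control.
class Vec
  attr_reader :x, :y

  def initialize(x, y)
    @x, @y = x, y
  end

  # `a + b` desugars to `a.+(b)`
  def +(other)
    Vec.new(x + other.x, y + other.y)
  end

  # `v[i]` desugars to `v.[](i)`
  def [](i)
    i.zero? ? x : y
  end

  def to_s
    "(#{x}, #{y})"
  end
end

v = Vec.new(1, 2) + Vec.new(3, 4)
puts v     # => (4, 6)
puts v[1]  # => 6
```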
I don’t think that’s quite right. That sounds similar to Common Lisp macros, but less powerful.
A macro in Common Lisp is a function that runs at compile (or interpret) time and transforms one block of code into another block of code before actually compiling or interpreting it.
It boils down to the same thing in a purely interpreted language like Ruby, because “runtime” and “compile time” (or “interpretation time” in Ruby, I guess) coincide, but it makes a major difference when compiling.
The problem is that such articles have to claim superiority by denigrating another paradigm with examples and code that shows poor practice.
People like FP because it is well suited to the transformation of data, and for many programmers that is all there is. Try modeling a complex payroll system in code using such a transformation-of-data approach. Enterprises tried that for over a decade during the “structured analysis” era, but as “engineers” we all know that, don’t we?
I have no idea what you are alluding to but I am honestly very intrigued. As someone who likes functional programming and complex enterprise systems I would definitely like to hear your thoughts/experiences regarding why FP might not be a good match for enterprise software development. (I do know about structured design, if you’re referring to the Yourdon/Constantine sort of thing. But not sure how it’s connected here.)
As for the reasons why people like FP, I agree transformation of data is one factor, but I would say it’s “pick 3-5 out of:”
referential transparency
immutability
higher-order functions
pattern matching
“wholemeal programming”
comprehensive type checking
type-level programming
freedom through constraints
Most of those can be found in non-FP languages, but when you have a critical number of them come together in a cohesive way, that certain magic starts to happen. There is no single definition that fits all languages that are commonly categorized under the FP label. If I were to pick one, I would say “higher-order functions as the default and most convenient way of doing everything”.
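A small Ruby snippet (my own toy example) showing a few of those ingredients working together: higher-order functions, immutability via freeze, and pattern matching:

```ruby
orders = [
  { id: 1, status: :paid,    total: 30 },
  { id: 2, status: :pending, total: 15 },
  { id: 3, status: :paid,    total: 55 },
].freeze  # immutability: mutating `orders` now raises FrozenError

# Higher-order function (sum with a block) plus pattern matching
# (case/in, Ruby 3+), destructuring `total` straight out of the hash:
revenue = orders.sum do |o|
  case o
  in { status: :paid, total: }
    total
  else
    0
  end
end

puts revenue  # => 85
```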
Most of those can be found in non-FP languages, but when you have a critical number of them come together in a cohesive way, that certain magic starts to happen.
Indeed. You can do FP in most languages, and you can do OOP in most languages (even C!), but if the language is designed and optimized for certain styles/patterns of programming, then you’re usually giving up both convenience/ergonomics and performance by going against the grain. Case in point: JavaScript devs trying to lean into FP patterns and making copious, unnecessary copies of local arrays and objects just for the “immutable-only cred”, or writing “curried” functions when that means creating tons of temporary Function objects that need to be garbage collected at runtime (as opposed to languages like OCaml, which optimize these things so well that curried functions mostly have no runtime overhead compared to uncurried ones).
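For the currying point, Ruby’s Proc#curry makes the allocation cost easy to see: each partial application returns a fresh Proc object (a quick sketch of mine):

```ruby
add = ->(a, b, c) { a + b + c }

curried      = add.curry   # a new Proc wrapping `add`
add_two      = curried[2]  # another new Proc, capturing 2
add_two_more = add_two[3]  # and another, capturing 3

puts add_two_more[10]  # => 15
```

Fine for occasional use, but in a hot loop those intermediate Procs are exactly the kind of garbage the comment is describing.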
But, I will say this: I really don’t love FP (as in the “use mostly pure functions to just transform data” definition) for IO-heavy applications (really, almost anything with a DBMS underneath). Doing “proper” FP style is often the antithesis of “mechanical sympathy”, meaning we have to choose between writing elegant data transformation code and writing code with reasonable performance characteristics.
For example, it’s extremely common when working with a SQL database to open a transaction, read some data, make a decision based on that data, and then conditionally read or write more data. To implement this in conventional FP style, you have basically two options:
Break up this unit of logic into multiple pure functions and have your top-level handler piece them together like IO -> pure function 1 -> IO -> pure function 2 -> IO. That’s fine if your pure functions actually make sense to be separate named operations. Otherwise, you’re breaking something that is semantically a single business operation into multiple pieces that don’t actually mean anything in your domain.
Read ALL of the data you could possibly need upfront and then feed it into the pure function before writing the final result. This is obviously a bad idea in any system that is supposed to be able to scale. Doing extra database queries when you might not even use the results should send you straight to jail.
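For what it’s worth, here is roughly what option 1 looks like in Ruby. FakeDB is a stand-in I made up so the sketch runs; it is not any real driver’s API:

```ruby
# In-memory stand-in for a database connection (hypothetical API).
class FakeDB
  def initialize
    @customers = { 7 => { id: 7, orders_count: 12 } }
    @orders = []
  end

  def transaction
    yield # a real driver would BEGIN/COMMIT around this
  end

  def find_customer(id)
    @customers[id]
  end

  def insert_order(order)
    @orders << order
    order
  end
end

# Pure step 1: decide the discount from customer data alone.
def discount_for(customer)
  customer[:orders_count] > 10 ? 0.1 : 0.0
end

# Pure step 2: apply the discount to the order.
def priced(order, discount)
  order.merge(total: (order[:subtotal] * (1 - discount)).round(2))
end

# Top-level handler: IO -> pure -> IO, inside one transaction.
def place_order(db, customer_id, order)
  db.transaction do
    customer = db.find_customer(customer_id)   # IO
    discount = discount_for(customer)          # pure
    db.insert_order(priced(order, discount))   # IO
  end
end

db = FakeDB.new
result = place_order(db, 7, { subtotal: 100.0 })
puts result[:total]  # => 90.0
```

Whether `discount_for` and `priced` deserve to exist as named things, or are just one business operation chopped up to stay pure, is exactly the judgment call at issue.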
Machines don’t remember, they simulate memory. They don’t recognize things, they simulate recognition. They don’t think, they simulate thinking. Externally these processes appear similar to a degree, as there are functional overlaps. But for the machine, all there is is the outside. There is nothing inside, nothing from the machine’s point of view, nothing that it is like to be a machine. You will never get from simulated thought, simulated memory, simulated feeling to actual consciousness. At best, you’ll create something that is better at fooling more people.
This line of argument will never be convincing until the development of an external method to establish that a given person or creature has internal experience. For now we have our own internal experience and verbal attestation.
That is exactly the argument. You know that you have internal experience. If you (and everyone else) didn’t, the issue would never arise. No-one would ever come up with the weird question of whether some machine (or whichever system) has internal experience unless we knew for a fact that we do. It’s not a concept that would pop up out of nowhere. There will never be a more convincing argument, and there will never be a method to establish objectively that some person/creature has it.
Well, one thing I do know from my limited knowledge of what Buddhism claims is that we do not have consciousness or internal experience all the time. We will instead experience moments of self-awareness and then back-project that this was always the case during whatever task we were doing for the past length of time. Once you get some meditation training you can catch yourself doing this. It is reasonable to say that I am not experiencing self-awareness right now since I am not in one of those moments, even though I am currently writing about it. But clearly I am acting and typing as though I have internal experience in this moment. Why could the same not be true for a machine, or an animal? Why could it not also have moments of self-awareness?
It’s less “why isn’t it possible” and more “how do we even get to that point”. It’s not about “could it be done” and more “what does the path to arrive at the point where we’ve done it even look like”.
To me, a lot of GAI discussion feels like talking about the future of knee surgery before the invention of germ theory. I think there are thousands of “unknown unknowns” in methodology, thinking, reasoning, and theory on the journey to GAI, and shaking all of those out will probably take a couple hundred years, as an optimistic estimate.
But that is all of modern medicine; our technology is far, far below the level of the crazy future nanomachines that run our body. Experimentation runs ahead of theory and people only have foggy theories about why various drugs work, especially psychiatric medications. Given that an unthinking process (evolution) produced us, it is totally conceivable that GAI could be produced through experimentation following a gradient of abilities without understanding what consciousness is. I’m not a “GAI is imminent” person but the idea that figuring this stuff out is a precondition to producing consciousness is not true; actually I think the only route to figuring this stuff out is through producing GAI.
But that is all of modern medicine; our technology is far, far below the level of the crazy future nanomachines that run our body. Experimentation runs ahead of theory and people only have foggy theories about why various drugs work, especially psychiatric medications.
That’s my point. We’re people from the 1400s discussing the future of online banking. We’re stone age people talking about car mechanics and industrial resin production.
the idea that figuring this stuff out is a precondition to producing consciousness is not true; actually I think the only route to figuring this stuff out is through producing GAI.
I mean, I don’t think we’ll have consciousness Totally Figured Out when/if we develop GAI, but there’s a long long road to where it is and I don’t think we can make any reasonable estimations other than “There are a lot of stepping stones to get there that totally change the surface of the pond”
My following comment feels massively off-topic now I’ve written it, but: In Buddhist literature, “consciousness” (vijnana, also called discernment) is one of the five factors of clinging and always present, and one’s belief that it makes up our “self” or “soul” is something that causes one’s suffering. There are multiple aspects of consciousness depending on school, e.g. “sense consciousness”, consciousness arising from the stimulation of one of the senses, or “mind consciousness”, consciousness arising from mental factors, the “thinking” consciousness. It is my understanding of Buddhist philosophy that consciousness is always there in some lifetime, whether you are clinging to it, aware of it, heedful of it or not. But it is different to “self-awareness”, which might be better described as “consciousness of consciousness” or such.
we do not have consciousness or internal experience all the time
As noted in another comment, that is not what Buddhism claims. Mindfulness or meta-awareness (knowing that you know) might be rare states, but basic conscious experience is ever present. When you are thinking “I am not experiencing self-awareness right now”, you are aware. Without awareness there can be no thought, feeling, sense-impression etc.
I am acting and typing as though I have internal experience in this moment. Why could the same not be true for a machine, or an animal? Why could it not also have moments of self awareness?
Which is it? Are you merely acting as if you had internal experience? If so, yes clearly a machine can also act as if it had an internal experience (eg by outputting the string “I have an experience”). Or do you actually have internal experience? If so, can a machine have that? That’s a much more interesting but completely different question. (A lot of people disagree that it’s a different question, of course, but we won’t resolve that here.)
I’m pretty sure that by this definition people don’t remember either; we simulate memory. Unless the precision and immutability of computers is the simulation of human memory, which is instead a continuous reimagining of the past.
Anyway, yes almost completely agree. This must explain the sense of deep dread I feel when trying to learn a new kubernetes concept that makes me want to close the page and reach for a drink.
I might be tarred and feathered for saying this, but I believe that Kubernetes is a perfect example of what “write dumb code” looks like in the pathological case.
If your code is not able to represent the inherent complexity of the problem domain, that complexity goes somewhere. Since k8s is written in a language that is arguably too simple for the problem, the complexity has moved to the configuration. A language with a more sophisticated type system could produce much simpler configuration.
What does OOP even mean if not “class inheritance”? It seems like OOP proponents have broadened the definition over time to include data-oriented and functional programming paradigms; I suspect the Java and C# that is written today, with its emphasis on functional concepts, would not have been considered “OOP” by practitioners in 2002. Moreover, what is the difference between “OOP with minimal class inheritance” and, say, idiomatic Go or Rust? It feels like “OOP” is too poorly defined to be useful (its definition expands and contracts depending on convenience), but maybe I’m mistaken here?
Inheritance, encapsulation, dynamic dispatch. OO languages started to incorporate more things, but those three have remained central to them. Other languages have found other ways of implementing those concepts (traits, generics, unexported fields, etc.), so it blurred the lines.
So if you write Java without inheritance, or if you use inheritance only as a mechanism to automatically delegate methods to some member (as a convenience, like Go’s struct embedding syntax sugar), is that considered OOP or no? If yes, then would Go and Rust be considered OOP, since they also support encapsulation and dynamic dispatch? Or does OOP require more emphasis on inheritance to distinguish it from other paradigms?
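That “inheritance only as a mechanism to automatically delegate” idea can be sketched in a few lines. Python here, chosen only because the thread spans several languages; the class names are made up for illustration:

```python
# Composition plus automatic delegation, similar in spirit to Go's
# struct embedding: reuse without a subtype relationship.

class Engine:
    def start(self):
        return "engine started"

class Car:
    def __init__(self):
        self._engine = Engine()  # composed, not inherited

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails: forward
        # unknown attributes to the embedded Engine.
        return getattr(self._engine, name)

car = Car()
print(car.start())  # "engine started", via delegation; Car is not an Engine
```

The convenience of inheritance (method reuse) is kept, but `isinstance(car, Engine)` is false, so there is no taxonomy involved.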
That static classification is not a good way to think about it IMO. Most languages can express different behaviors and you can program in more than one style in them. The general way in which one programs in them tells us a lot more about the language.
Before ALGOL, languages didn’t really have functions/procedures, control flow, etc. Then languages like C and Pascal popularized them and became known as “procedural languages”. Now every single language has that, including OO languages, so are they all just procedural languages? I’d say no, because the style is just different and adds different ideas.
Languages evolve and absorb concepts in their own ways. Go and Rust have incorporated some of those OO ideas in their own way, but it’s a different style. It’s hard to reduce most languages to a handful of concepts. Given enough time, a concept that was once associated with “Functional” or “Object-Oriented” just becomes table stakes and everyone includes it.
Or does OOP require more emphasis on inheritance to distinguish it from other paradigms?
Yes, I’d say inheritance would be the most defining characteristic, along with encapsulation and dynamic dispatch. Inheritance is mostly used to take advantage of the other two concepts; probably most people today recognize that the taxonomy part of inheritance is not really all that helpful.
That static classification is not a good way to think about it IMO. Most languages can express different behaviors and you can program in more than one style in them. The general way in which one programs in them tells us a lot more about the language.
I agree with this, and I probably should have taken more care to say something like “are idiomatic Go and Rust programs considered OOP”. I also agree that there’s some degree of orthogonality between these different paradigms, but if they’re completely orthogonal then the original claim that OOP is “the dominant model” seems meaningless.
Given enough time, a concept that was once associated with “Functional” or “Object-Oriented” just becomes table stakes and everyone includes it
Sure, but again (to your earlier point) I’m less interested in the features available in languages and more how programs are written. If a program doesn’t use (or makes only superficial use of) inheritance but does use encapsulation and dynamic dispatch, then can we credibly say it’s an OOP program? If so, then I would argue that much of what is considered “good, modern Java and C#” is not really OOP, and that the original claim that OOP is “the dominant model” is largely incorrect.
And conversely, if we allow for such inheritance-eschewing programs to be considered “OOP”, then suddenly a lot of idiomatic Go, Rust, Haskell, OCaml, etc programs become “OOP” insofar as they avail themselves of encapsulation and dynamic dispatch. At that point it seems like “OOP” loses a lot of its meaning and thus its conceptual utility.
Before ALGOL, languages didn’t really have functions/procedures, control flow, etc. Then languages like C and Pascal popularized them and became known as “procedural languages”.
I believe the exact term you’re looking for is “structured programming”. And all mainstream languages today are definitely structured.
Yes, the problem is exactly that OOP isn’t a coherent enough concept to be useful.
I think the best definition (hinted at elsewhere in the thread) is “mash together inheritance, encapsulation, polymorphism, and perhaps message-passing”, but nearly every discussion about how to build software would be improved if you stopped talking about OOP and talked instead about inheritance, encapsulation, polymorphism, and message-passing as distinct concepts that are useful in different places.
(Most discussions of OOP don’t mention message-passing at all, which is weird because the person who coined the term OOP actually meant it in a way that put messages first, but language changes with use and all, so I get it.)
Modularity: The program is separated into various modules, each comprising the definition of a data structure, and functions and procedures using (or operating on) that data structure. For instance: a C compilation unit, an ML module, or a Java class.
Instantiation: Basically the rejection of global variables, such that you can have several instances of the same data structure. Very useful for instance when you want several parsers in your program.
“Good”: Because indeed the common denominator of OOP doesn’t include much more than the above. Which isn’t much at all, considering the above is also present in procedural and functional programming. The definition of the day will always add ever changing details, depending on what is considered “good” at the time. If it’s good, you can be sure OOP will eventually steal it. If it’s not, it will get kicked out of being a fundamental part of OOP eventually.
The point is to make sure “OOP” never dies. It will change its own meaning to adapt to modern practices, and then look back at its made-up longevity to be able to say it was right all along. Some things are probably safe from ever changing, though: modularity and instantiation, for instance, are so universally useful that they’ll never be kicked out.
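The two common-denominator ideas above, modularity and instantiation, fit in a few lines. A minimal Python sketch (the Parser here is made up, echoing the “several parsers” example):

```python
# Modularity: a data structure bundled with the procedures operating
# on it. Instantiation: multiple independent instances, no globals.

class Parser:
    def __init__(self, text):
        self.text = text  # the data structure
        self.pos = 0      # per-instance state, not a global variable

    def next_word(self):  # a procedure operating on that data
        words = self.text.split()
        word = words[self.pos] if self.pos < len(words) else None
        self.pos += 1
        return word

# Two parsers coexist, each with its own cursor:
a, b = Parser("hello world"), Parser("foo bar")
print(a.next_word(), b.next_word())  # hello foo
```

Nothing here is specific to OOP; the same shape exists as a C compilation unit with a struct, or an ML module.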
I stopped worrying about naming programming paradigms years ago. Just try stuff and keep what works.
What does OOP even mean if not “class inheritance”?
Let’s ask the person who coined the term back in the ’60s: Alan Kay. He defined it as:
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things
Of these, I think the only one that people would disagree with now is the last one. There are places for extreme late binding, but things like structural and algebraic type systems let you program as if you have extreme late binding but get the performance of early binding.
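As one concrete illustration of “program as if you have late binding” via structural typing, here is a small sketch using Python’s `typing.Protocol` (the shape classes are invented for the example):

```python
from typing import Protocol, runtime_checkable

# A structural type: anything with a compatible area() method conforms,
# with no inheritance and no registration.
@runtime_checkable
class HasArea(Protocol):
    def area(self) -> float: ...

class Square:  # note: no base class, no mention of HasArea
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side * self.side

def total_area(shapes: list[HasArea]) -> float:
    return sum(s.area() for s in shapes)

print(total_area([Square(2.0), Square(3.0)]))  # 13.0
print(isinstance(Square(1.0), HasArea))        # True, checked structurally
```

A static checker verifies conformance ahead of time, so call sites get the flexibility of late binding without committing to a class hierarchy.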
I mean, I’ve heard this before, but I’ve heard lots of OOP proponents reject that definition. When I was learning OOP back in the early 2000s, precisely none of the content I saw discussed message passing—it always emphasized classes and inheritance and architecting your software according to a “Kingdom of Nouns”. I’m fine with the definition, it just doesn’t match what I understand most other people to mean by the term.
There’s a lot of cargo-cult OO stuff. Some OO languages have feature X, therefore X must be the important bit! But most of it misses the point. Data hiding and loose coupling are the key parts. Inheritance is a bolted-on optional extra. Self had inheritance, but via (multiple) prototype chains and not as part of a type system. Languages like C++ and Java tried to improve performance by coupling implementation sharing with subtyping, which in hindsight was an absolute disaster.
Language follows usage. Just because one person used a term first doesn’t mean they get to define it forever. Kay’s definition is more useful than the popular one, but at this point things are so muddled that neither definition actually improves clear communication, and both should be abandoned.
Perhaps better to consider the viewpoints of those who actually invented it. Hint: it wasn’t Alan Kay. His viewpoint is entirely related to Smalltalk, which other than “objects all the way down” and meta-Lisp like concepts offered nothing new in terms of OO. Not objects, classes, functional polymorphism or inheritance. Not even object.method (dot) notation.
First and foremost, it’s about organization. The dominant (sometimes the sole) form of organization is the class, or in some cases the object, since some languages are not class-based. The organization is naturally around data structures, with behaviors tied to those data structures. A humorous look at this is the “kingdom of nouns” blog post from years back: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html
With that organization style being natural and dominant, a major benefit is encapsulation. Instead of “code” being unorganized and free floating, it tends to be glued onto some relevant data structure (the combination is a class / object). Encapsulation allows for data hiding (usually a very good thing).
Composition takes many forms, which vary widely from language to language. Interface-based. “Pure virtual” classes. Traits and mixins. Aggregation. Delegation. But the most widely known form of composition is inheritance. The big question of “composition” in any programming language is: “How do I get exactly what I want, in a manner that will be obvious to the reader, without having to repeat myself?” And for class-based OO languages, that means “How do I reuse this ‘class’ thing as a whole?” Obviously, inheritance does exactly that. But so do traits and mixins. So do aggregates and type delegations.
Virtual behavior (virtual function calls) is a natural side effect of some of these composition forms, because of the layering that occurs when classes are combined (inheritance, traits, mixins). Virtual behavior is extremely useful, but it does come at a cost. For example, in the performance realm, a virtual call costs around 29 clock cycles on Intel’s x64 chips, plus a pointer chase (through the virtual function table) which impacts CPU cache usage. While this has no measurable negative impact on a typical business application, it can easily have a significant impact on a low-level library.
It feels like “OOP” is too poorly defined to be useful
Like a lot of broad terms, the utility is not in the exact definition, but in the ability to provide general context. But I’m not sure what you’re expecting in terms of “useful” here. The term itself is just a means to communicate, among people who probably already know a bit about the topics being communicated.
Like a lot of broad terms, the utility is not in the exact definition, but in the ability to provide general context. But I’m not sure what you’re expecting in terms of “useful” here.
Every time “OOP” comes up, the majority of the comments are arguments about what it actually means.
This is not a phenomenon that happens with useful, clear terminology. It’s time to move on and use better words.
I suspect the Java and C# that is written today, with its emphasis on functional concepts–would not have been considered “OOP” by practitioners in 2002.
Object-oriented code that is occasionally spiced with lambdas is something completely different from code in a language that allows only functions. The “emphasis” mostly means that people talk about newly added features rather than ones that have been present for decades. And it is mainly syntactic sugar for equivalent classes and patterns. See also my other comment.
For some things perhaps, for others not. It remains dominant for teaching and representing Abstract Data Types. It remains dominant for the creation of development and runtime frameworks for platforms. It remains dominant for modeling a large proportion of simulations.
However, it is no longer dominant (sadly, was it ever?) for representing a business problem domain model. This was the real intention of the inventors of OO, particularly Kristen Nygaard: the modeling of collaborating phenomena and concepts of a referent system. The so-called “Scandinavian school” viewpoint. Note that, for my purposes, merely using an OO language doesn’t equate to OO programming.
I think a great deal of confusion stemmed from conflating OO programming with OO modeling. The latter can be taken too far, but has obvious strong points. The former was an interesting idea that was quickly obfuscated. The best realization of (the best) OO ideas might be in Erlang, although they didn’t intend that, just as they didn’t intend to create a functional language.
I’ve been running essentially this setup in production for 1.5 years, with a Broadway application [1] accepting messages and delegating most of the work to Gleam code. Gleam is what finally got me to commit to the BEAM, because a strong type system is a big productivity booster for me, while the Elixir ecosystem (especially LiveView) is just fantastic and delightful.
However, note that mix_gleam is a bit rough and doesn’t work well with the Gleam language server. You won’t get proper LS feedback, and you’ll see false compiler warnings, mostly because Gleam can’t detect the transitive dependencies handled by mix. A fix is to manually add the transitive dependencies to the gleam.toml file, but that’s tedious. There is an open PR to semi-automate this [2].
It does seem to suffer from the same problem with recursive dependencies, however. (According to recent discussion on the Gleam discord server).
In short, the integration is not yet as tight as one might wish. A simpler approach is to set up a separate Gleam project and use it as a path dependency from Elixir.
Hi, I haven’t, but it’s on my todo list. Just had no opportunity, since most of my dev environments fit in a devshell flake. Would be happy to try it out and share my experience, as I am also a big fan of your work.
Yes - contributed too! I was successful in using it to replace Docker Compose for starting services in one of our big projects (yay!) but I have been waiting for the ability to create MongoDB initial accounts before I introduced it to my team, and haven’t had time to revisit since that landed.
I spent over a decade in ecommerce product configuration and customization. The logic was gnarly for most projects. BREs were a siren song. Fortunately, many programmers had made the mistake of using them ahead of us. Hearing misfortunes like this one was sobering. They merely move the problem of complex logic to a kind of poor-man’s DSL that only one or two people know how to use. That said, I do still wonder sometimes if Prolog would have suited some forms of constraint logic like this better than SQL and mainstream, general purpose languages. But the track record for Prolog seems… limited.
The problem with Prolog IME is that it’s almost too powerful and “elegant”. Prolog has two very impressive features:
Like a Lisp, Prolog is homoiconic—the only construct is a relation. Everything else—numbers, lists, etc—is just sugar on top of relations
Backtracking—since Prolog doesn’t differentiate between inputs and outputs, it’s possible to run programs in reverse
However, I’ve found that these two features sometimes make it hard to debug programs. The problem is that all Prolog programs that are syntactically valid are semantically valid as well, even if you attempt to do something “nonsensical”, like checking a list for equality against a number, because all constructs bottom out at relations.
Backtracking suffers from the same issue (although not as bad); it’s extremely powerful but it can be difficult to debug and easy to introduce by accident.
I think these two factors are some of the reason we don’t see more Prolog “in the large”. That said, Prolog is a wonderful language and I recommend everyone to learn it regardless!
Like a Lisp, Prolog is homoiconic—the only construct is a relation. Everything else—numbers, lists, etc—is just sugar on top of relations
Those are orthogonal. Homoiconicity has nothing to do with whether there is only one construct. Lisp was never like that.
like checking a list for equality against a number, because all constructs bottom out at relations.
Is there any dynamic language where comparing two values of different types is an error? Doing an equality check between a list and a number is perfectly fine in Python.
Is there any dynamic language where comparing two values of different types is an error? Doing an equality check between a list and a number is perfectly fine in Python.
You’re right, this was a bad example. A better one might be trying to get the first element out of something that isn’t a list.
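Concretely, in Python (which the parent comment mentions), the two cases behave differently:

```python
# Comparing values of different types is legal; they are simply unequal:
print([1, 2, 3] == 5)  # False

# But asking for "the first element" of a non-sequence is an error:
try:
    (42)[0]
except TypeError as exc:
    print("TypeError:", exc)  # 'int' object is not subscriptable
```

So even a dynamic language draws a line somewhere, whereas in Prolog the analogous query would just fail rather than raise.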
Do you think there is scope for doing this kind of system with datalog rather than prolog and would that avoid any problems that you see in a prolog-based system?
For context, I have some interest in https://www.biscuitsec.org/ where authorisation logic is expressed via datalog programs. Wondering if you have any insight?
For small programs I think both Prolog and Datalog would work really well. For “programming in the large” I think something like Mercury sounds very promising, but it’s still a very small, niche language. Verse seems kind of similar, but I’m not familiar enough with it to make any judgments.
As I mentioned in another comment, I think Answer Set Programming is pretty much in the sweet spot for these use cases. It has none of the footguns of Prolog (including backtracking), behaves more like a Datalog but with quite a bit more power, has very natural semantics, and acts pretty much as a black box oracle answering queries based on your rules.
One of my ongoing goals is to embed Prolog into other languages to improve uses-cases like this (so far the fruits of my effort are a Trealla Prolog wasm port with Go and JS embeddings). Currently the most annoying part is converting data back and forth, I’d like to experiment with some fancier FFI functionality. The SWI-Prolog wasm port has a neat JS FFI interface that I may steal.
I think Prolog is unfamiliar enough that a lot of devs aren’t comfortable with writing an entire project in it, but I hope to improve that familiarity by easing into adoption through rules engine/scripting-type usages. Personally I have been enjoying it a lot. Being able to query not just your DB but the code itself is an awesome superpower :)
In terms of syntax and also implementation heritage, Erlang directly descends from Prolog and has been wildly successful in the domains it was intended to be used for.
My experience tells me differently. At my previous job, for the team I was on, we had an ever evolving set of business logic while the rest of the system remained (largely) the same. Yet because we did not have a separate business_logic() function, testing wasn’t easy. The problem we had was a mixing of I/O and logic. It sucked and it didn’t have to be that way.
Not sure if you were being snarky, but rules engines don’t need to be enterprisey or based on XML. There are several lightweight alternatives. (Though my preference would be something from the logic programming world, such as Answer Set Programming, which has much clearer semantics.)
Controlled side effects via algebraic effects or monads seems like a good halfway house.
Most of my Haskell work tends toward determining the set of primitives needed to write the code the way that makes the most sense, and then encoding the rules in that way.
It’s great to see Sidekiq still going strong after all these years! The iteration feature is really interesting, and I can think of many situations where it would have come in very handy but I had to reach for a more complicated solution. Don’t think I’ve seen that in similar libraries.
It’s directly adapted from https://github.com/Shopify/job-iteration
Every big Rails team I have worked on ends up building their own janky version of it (I’ve built a couple), very nice to have Sidekiq provide an implementation.
This looks great. I really like the fact that SQL/PGQ is just a thin layer (essentially syntactic sugar, like views) on top of regular relational tables. I’ve been eyeing AGE for years, but didn’t like that graphs are completely separate objects from tables. I guess that approach is or could be more performant, and it might be nice to have full-blown graph capabilities inside of Postgres instead of deploying a separate graph database system. But for my use cases, I’m not looking to host massive graph data, but just want to be able to treat relational data as a graph without multiple CTEs - which is exactly what SQL/PGQ does. Looking forward to this getting stabilized and released!
The pgsql-hackers thread linked from the post has some interesting details: https://www.postgresql.org/message-id/flat/a855795d-e697-4fa5-8698-d20122126567%40eisentraut.org
Ruby’s use of select comes from Smalltalk, which has a somewhat cute set of standard collection iterator methods.

I wasn’t familiar with LINQ, but it looks a lot like a rationalized SQL, putting the select clause at the end where it makes better sense. The omission of SQL from the philology is a bit surprising, although I guess it’s not considered a “programming language”.
There’s more cuteness behind those names: they came from Arlo Guthrie’s hit Alice’s Restaurant:

Came to talk about the draft: They got a building down New York City, it’s called Whitehall Street, where you walk in, you get injected, inspected, detected, infected, neglected and selected.
Alice May Brock, whom the song was named after, passed away in November 2024
Sources:
Yeah, LINQ is really interesting. It lets you write queries in a fairly clean generic syntax that converts to an API that different back ends can use, including of course ones that transpile to SQL.
I would like a program that uses openstreetmap data, such as building heights and forest areas, as well as calculated position of the sun, to find good circular paths for taking a walk that lets you see the sun a bit. I live far north among tall trees and buildings and sometimes it’s quite a challenge to see the sun before it sets.
Shademap seems to do about half of what you want: shademap.app
this is amazing, thanks for the link
Very interesting! Thanks for the tip!
That’s a great idea, actually! I’d like such a tool, too! I’ve recently heard about OpenPV, which is a German open-source web-based tool to calculate suitable positions for PV panels and the expected power generation. It relies on much the same data: building heights and vegetation, but OSM data wasn’t good enough for them. Still, maybe one could reuse some of their code.
Beautiful. Another great read on dithering with a similar goal is Lucas Pope’s devlog from creating Return of the Obra Dinn: https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742
Thank you for linking it here. The video references these posts but I didn’t have time to trace them to the forum.
I find the arguments here strange.
Of course, this depends on your infrastructure, but setting up a couple of read replicas is not especially hard.
Well… don’t?
I do appreciate the “Other Views” section! And I agree with those “other views”.
I’m with you.
Also I’d add: the chances of the SQL execution engine having bugs are probably a lot lower than the chances your application code will. So as long as you get your SQL query correct, the odds of having bugs are much lower than if you write the logic yourself.
Once you have something like PgTAP so you can test your SQL queries and DB, you are in amazing shape for generic well-tested code that is very unlikely to break over time.
IDK. Complex queries are hard to build indexes against, and those are hard to test locally since they require a lot of data.
Sometimes perf in app code can be more predictable
I agree that unless you know SQL and your DB, it can be hard to reason about how it handles performance. In your own code, that would be easier, since you wrote it and (hopefully) understand its performance characteristics.
I would argue bug-free, predictable code is more important than performant code. In my experience, when performance starts to matter is also when you can generally start to dedicate resources specifically to performance problems, i.e. you can hire a DB expert in your particular DB and application domain.
I’m not with you on the ‘hard to test locally, since large data is required’ part, though. You usually get access to something like PG’s EXPLAIN, where you can verify the right indexes are being used for a given query. Sure, you might need some large test instance to figure out what the right indexes are, but that’s a development task, not a testing task, right? Once you figure it out, you can write a test that ensures the right indexes are being used for a given query, which is the important part to test, from my perspective.
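The “assert on the query plan in a test” pattern can be illustrated with SQLite’s EXPLAIN QUERY PLAN from Python’s standard library (the comment is about Postgres, but the idea carries over; the table and index names here are made up):

```python
import sqlite3

# In-memory database with an indexed column, standing in for the real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Ask the planner how it would execute the query (4th column is the detail).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
plan_text = " ".join(row[3] for row in plan)
print(plan_text)

# A test can then fail loudly if the index stops being used:
assert "idx_users_email" in plan_text
```

The Postgres equivalent would run EXPLAIN and check for an index scan node, but needs a real server, so it lives better in CI than in a snippet.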
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way; think of a non-technical older person trying to navigate the little pop-out hamburger menus and so forth, or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
To be fair, JSX is a pleasurable way to sling together HTML, regardless of whether it’s on the frontend or backend.
Many backend server frameworks have things similar to JSX.
That’s not completely true; one beautiful JSX thing is that any JSX HTML node is a value, so you can use your whole language to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
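A tiny sketch of the HTML-as-values idea, in Python with made-up helper functions (not any particular library): because nodes are ordinary values, lists, comprehensions, and functions all apply directly.

```python
from html import escape

def el(tag, *children):
    # An element is just a value built from its children.
    return f"<{tag}>{''.join(children)}</{tag}>"

def item_list(items):
    # Full power of the language: build children with a comprehension.
    return el("ul", *[el("li", escape(i)) for i in items])

page = el("div", el("h1", "Fruits"), item_list(["apple", "pear & fig"]))
print(page)
# <div><h1>Fruits</h1><ul><li>apple</li><li>pear &amp; fig</li></ul></div>
```

A templating engine can express the same output, but here the intermediate pieces are first-class values you can store, pass around, and test.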
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries. In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread “many” as “most”, as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate our whole Ruby app to Scala so we can get ScalaTags. JSX was the first such values-based HTML builder to see mainstream use; you and the sibling comment cite Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on front and back which makes it useful for generating HTML. Your post, your sibling and the OP just sound slightly butthurt at Javascript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just find that as arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that, after all, I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with as strong checks as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad; however, one has to learn it first, which is definitely some overhead. We just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early naughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
I’m aware, I wrote such a lib for common lisp. I was talking that in most frameworks most people use they are still at the templating world.
It’s a shame other languages don’t really have this. I guess having SXSLT transformation is the closest most get.
Many languages have this, here’s a tiny sample: https://github.com/yawaramin/dream-html?tab=readme-ov-file#prior-artdesign-notes
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
JSX is one among many 😉
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(So personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of “render” is to extract, convert, deliver, submit, etc. So this use is perfectly in line with the definition and with centuries of usage IRL, so I can’t complain too much really.)
You can render a template (as in, plug in values for the placeholders in an HTML skeleton), and that’s the intended usage here I think.
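Ruby’s standard-library ERB is a concrete example of that sense of “render” - plugging values into an HTML skeleton:

```ruby
require "erb"

# "Rendering" here = filling in the placeholders of an HTML template.
template = ERB.new("<h1>Hello, <%= name %>!</h1>")
name = "world"
puts template.result(binding)
# => <h1>Hello, world!</h1>
```

No pixels involved - the output is just more text, which the browser then renders (in the graphical sense) for real.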
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
It seems that was a phase? The term transpiler annoys me a bit, but I don’t remember seeing it for quite a while now.
Worked very well for Opera Mini for years. Made very low-end web clients far more usable. What amazed me was how well interactivity worked.
So now I want a server side rendering framework that produces a PNG that fits the width of my screen. This could be awesome!
There was a startup whose idea was to stream (as in video stream) web browsing similar to cloud gaming: https://www.theverge.com/2021/4/29/22408818/mighty-browser-chrome-cloud-streaming-web
It would probably be smaller than what is being shipped as a web page these days.
Exactly. The term is simply wrong…
ESL issue. “To render” is a fairly broad term meaning to provide/concoct/actuate; it has little to do with graphics in general.
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
The way that seems ‘different’ to you is the way that is idiomatic in the context of websites 😉
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type `.astro` that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that Astro is a symptom of the exact same problem you are illustrating from that quote. That framework is going to die the same way that Next.js will: death by a thousand features.
huh, couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what `.astro` does (`.rb`, `.py`, `.yml`, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).

I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Please read the section about self-promotion on https://lobste.rs/about
The post is to educate, gets me no money, has no ads, is not related to my employer and not endorsing any products. I did check the box that I am the author. If you want to keep me as a member, please be more welcoming to me. Thank you.
This rule applies to everyone, even if you’re not making money off the posts.
The rule is enforced so inconsistently. Posts with more contention get the rulebook thrown at them, while people like https://lobste.rs/~stapelberg/ or even myself get away with it.
You participate quite a bit outside of your own submissions though, unlike OP who has apparently never commented on something they didn’t write.
As someone that feels discouraged to submit my own work because of this rule, I don’t find this argument to be compelling. What I get from this is, “as long as you contribute to the community in other ways, you can totally break these rules anytime you want, within reason, I guess.” That seems a bit silly to me.
Edit: grammar
Yours is an incorrect reading, and you haven’t submitted anything of any variety since you joined.
It’s not “you can break these rules anytime you want, within reason” so much as “the custom is not to make your own stuff the vast majority of what you post”. Any reasonable amount of other interaction means you are following that custom. It’s not some big thing, as much as occasional folks kvetch about it.
That custom exists for a reason: people see Lobsters as a marketing or advertising channel and treat it that way. It has been abused in the past. It appears to be being abused in the present. It will doubtless be abused in the future. That custom and vigorous flagging–when people notice!–are one of the few things we have going for us.
You’re new (5 months according to your profile at time of writing), so maybe you haven’t seen this before and it all seems rather arbitrary–so, I’ll try and explain:
Lobsters is a slow-moving site (compared to, say, the orange site or a decently popular subreddit). Posts here stay on the frontpage for a day or two easily. In advertising terms, a slot here stays up for a while and gets a lot of impressions by a pretty well-defined (and valuable!) audience. I have heard valuations in the low 5 figures. This makes us a target for growth hackers (and the modlog is full of efforts to kick them out when they are discovered).
We could ban self-promotion altogether, but it turns out that sometimes people do write interesting things themselves and would like to share them. We could always allow self-promotion, but that is basically incompatible with trying to prevent growth-hacking and marketing.
So, instead, we make the compromise of “If you post here and mostly bring in other interesting things, and occasionally your own stuff, that’s probably okay”. This means that anybody who actually is in it just to shill their own work must also do what amounts to community service, and that tends to give an easy way of spotting growth hackers and free riders.
I think it’s weird that you point out that I comment, but haven’t submitted anything, as if that has any impact on my argument here. Likewise, having a newer account doesn’t bar me from having an opinion or disagreeing with someone else. I’m clearly not some freeloading self-promoter. I’m a real person that just happens to disagree with that comment.
I’m not disagreeing with the custom. The notion that it’s not always applied fairly does resonate with me, though.
The rule is not “only 25% of your submissions can be your own”, its “25% of stories and comments”, so what you call “other ways” is explicitly part of it. But yes, I’m arguing that there is a fundamental difference between “doesn’t participate at all outside own submissions” (which is explicitly called out as what the rule wants to prevent) and “does participate regularly, but maybe doesn’t meet the rule of thumb threshold”.
Agree. I think these warnings are something that could and should be automated instead of people haphazardly being chastised. Most guidelines aren’t easy to automate, but this one would be. (And I definitely think it’s a good rule!)
If an exception is made for some people who only come here to post releases/links that a lot of people are interested in (lobste.rs/~aphyr comes to mind), then that can be explicitly flagged instead of them being unofficial royalty.
Hi. I’m fine losing you as a member if you only comment on your own stuff and (with one exception) only post your own writing. We’ve lost longer and more engaged contributors over less.
“Friendlysock, this isn’t terribly friendly at all!”–okay, sure, but you’re engaged in behavior that is indistinguishable from a long list of bad-faith and near-bad-faith actors. I’m sure lots of them have similar sob stories if given the opportunity.
In order to avoid this, for what it’s worth, all you have to do is comment a few times in other stories, submit interesting learnings that you didn’t write, and not be a shithead. It isn’t a particularly high bar, and if you find it difficult to clear there are many other communities that you’d doubtless find more welcoming.
The idea is to discourage folks who are here only to submit their own work, and only to participate in comments on their own stories — it’s a community, not a publication channel. This behaviour is unwelcome.
Very useful!
Regarding relativity: it says we can ignore it, with a footnote:
However, I’d also point out that relativity is appropriate when dealing with distributed systems; e.g. no universal clock, observers disagreeing on the order of events, etc. I’m not saying we should take Lorentz transforms of timestamps, or whatever; but that it gives the right intuition (there is no preferred observer/reference frame, communication latency is the defining feature of spatial separation, causality is a fundamental relation agreed by all observers, etc.), rather than trying to patch-up a newtonian intuition (e.g. trying to substract latency from timestamps to get the “real” time, or comparing sequences of events with a re-ordering tolerance of 1 second, etc.)
Lamport says relativity was one inspiration for his research: https://lamport.azurewebsites.net/pubs/pubs.html?from=https://research.microsoft.com/users/lamport/pubs/pubs.html&type=path#time-clocks
That is such a classic paper. I did my MSc looking at event tracing in distributed systems (in the days way before OpenTelemetry and all of the APM companies) and that paper kind of blew up my notion of what kinds of problems I needed to solve. In undergrad I’d taken a Modern Physics course so I was already mentally primed for ideas like the future light cone and being unable to define a total ordering of events when there’s spatial separation but that paper just drove it home as highly relevant to the work I was doing.
Thank you, great insight!
This is one of the things I love about Ruby. Most of the syntax is just sugar for a method call (or, technically, sending a message), and with the ability to monkeypatch everything you can rewrite the language to suit your fancy within Ruby itself!
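A minimal sketch of that sugar, using an illustrative `Money` class (not from any particular library): `a + b` is just the method call `a.+(b)`, so any class can define what the operator means.

```ruby
# In Ruby, `a + b` desugars to the message send `a.+(b)`,
# so defining a `+` method makes the operator work on your own class.
class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  def +(other)
    Money.new(cents + other.cents)
  end
end

total = Money.new(150) + Money.new(250)
total.cents # => 400

# The same call, written explicitly as a method call:
Money.new(150).+(Money.new(250)).cents # => 400
```

The same trick applies to `[]`, `<<`, `==`, and even `method_missing` for messages with no method at all - which is how so many Ruby DSLs are built without any macro system.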
What’s cool about it is that it’s not like a Lisp or Rust or C or other languages that let you define macros which require a separate pass in the compiler to expand them (although in Lisps the line between macro and language construct is very blurred by virtue of the syntax), or like a Forth where you have to define parsers for new words; Ruby has a well-defined syntax and no macro system, so it’s actually more similar to defining `__enter__` and `__exit__` in Python to make a class compatible with the `with` syntax. It’s just that Ruby has syntax sugar for method calls where other languages would have custom language constructs, so you can often fall back to the “core language” to redefine what a particular bit of syntax desugars to.

Languages often have these bits of syntax or other features that are linked to a restricted set of types and that you can’t instrument other types to use them with (like `?` in Rust, which currently only works on `Result` and `Option` instead of defining a `Try` trait that you can implement on other types, or Elm’s special type classes like “number” that are baked into the compiler, which probably take after SML’s arithmetic functions that work on both `int` and `real`, etc.). I like to call these things “compiler magic”, and I think it’s nice that Ruby has very little compiler magic.

I don’t think that’s quite right. That sounds similar to Common Lisp macros, but less powerful.
A macro in Common Lisp is a function that runs at compile (or interpret) time and transforms one block of code into another block of code before actually compiling or interpretting it.
It boils down to the same thing in a purely interpreted language like Ruby, because “runtime” and “compile time” (or “interpretation time” in Ruby, I guess) coincide, but it makes a major difference when compiling.
This meta programming style is mostly inherited from Smalltalk!
Looks like the author tried to be smart and overcomplicated the blog post unnecessarily.
?????
I’ve used dokku for 10+ years in various projects. It’s reliable and simple. (And yes, you can use it with pre-built images.)
https://dokku.com/
The problem is that such articles have to claim superiority by denigrating another paradigm with examples and code that shows poor practice.
People like FP because it is well suited to the transformation of data, and for many programmers that is all there is. Try modeling a complex payroll system in code using such a transformation-of-data approach. Enterprises tried that for over a decade during the “structured analysis” era, but as “engineers” we all know that, don’t we?
I have no idea what you are alluding to but I am honestly very intrigued. As someone who likes functional programming and complex enterprise systems I would definitely like to hear your thoughts/experiences regarding why FP might not be a good match for enterprise software development. (I do know about structured design, if you’re referring to the Yourdon/Constantine sort of thing. But not sure how it’s connected here.)
As for the reasons why people like FP, I agree transformation of data is one factor, but I would say it’s “pick 3-5 out of:”
Most of those can be found in non-FP languages, but when you have a critical number of them come together in a cohesive way, that certain magic starts to happen. There is no single definition that fits all languages that are commonly categorized under the FP label. If I were to pick one, I would say “high-order functions as the default and most convenient way of doing everything”.
Indeed. You can do FP in most languages, and you can do OOP in most languages (even C!), but if the language is designed and optimized for certain styles/patterns of programming, then you’re usually giving up both convenience/ergonomics and performance by going against the grain. Case in point: JavaScript devs trying to lean into FP patterns and making copious, unnecessary copies of local arrays and objects just for the “immutable-only cred”, or writing “curried” functions when that means actually creating tons of temporary Function objects that need to be garbage collected at runtime (as opposed to languages like OCaml that are able to optimize these things so well that curried functions mostly don’t have any runtime overhead compared to uncurried functions).
But, I will say this: I really don’t love FP (as in the “use mostly pure functions to just transform data” definition) for IO-heavy applications (really, almost anything with a DBMS underneath). Doing “proper” FP style is often the antithesis of “mechanical sympathy”, meaning we have to choose between writing elegant data transformation code and writing code with reasonable performance characteristics.
For example, it’s extremely common when working with a SQL database to open a transaction, read some data, make a decision based on that data, and then conditionally read or write more data. To implement this in conventional FP style, you have basically two options:
1. Break up this unit of logic into multiple pure functions and have your top-level handler piece them together like IO -> pure function 1 -> IO -> pure function 2 -> IO. That’s fine if your pure functions actually make sense to be separate named operations. Otherwise, you’re breaking something that is semantically a single business operation into multiple pieces that don’t actually mean anything in your domain.
2. Read ALL of the data you could possibly need upfront and then feed it into the pure function before writing the final result. This is obviously a bad idea in any system that is supposed to be able to scale. Doing extra database queries when you might not even use the results should send you straight to jail.
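A rough Ruby sketch of the first option, with all names hypothetical (`db` stands in for whatever database handle you use): the top-level handler alternates IO steps with pure decision functions.

```ruby
# Hypothetical example of the IO -> pure -> IO shape: the decision
# logic is pure, while the handler owns every database interaction.

# Pure: decide a discount from data that has already been read.
def discount_rate(order, customer)
  customer[:loyal] && order[:total] > 100 ? 0.10 : 0.0
end

# Pure: compute the final charge from the decision.
def final_total(order, rate)
  (order[:total] * (1 - rate)).round(2)
end

# Impure handler: each IO step feeds the next pure step.
def checkout(db, order_id)
  order    = db.fetch_order(order_id)                  # IO
  customer = db.fetch_customer(order[:customer_id])    # IO
  rate     = discount_rate(order, customer)            # pure
  db.record_charge(order_id, final_total(order, rate)) # IO
end
```

The pure halves are trivially testable, but note the criticism from the text: `discount_rate` and `final_total` only make sense as named operations if your domain actually talks about them that way.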
This line of argument will never be convincing until the development of an external method to establish that a given person or creature has internal experience. For now we have our own internal experience and verbal attestation.
That is exactly the argument. You know that you have internal experience. If you (and everyone else) didn’t, the issue would never arise. No one would ever come up with the weird question of whether some machine (or whichever system) has internal experience unless we knew for a fact that we do. It’s not a concept that would pop up out of nowhere. There will never be a more convincing argument, and there will never be a method to establish objectively that some person/creature has it.
Well, one thing I do know from my limited knowledge of what buddhism claims is that we do not have consciousness or internal experience all the time. We will instead experience moments of self-awareness and then back-project that this was always the case during whatever task we were doing for the past length of time. Once you get some meditation training you can catch yourself doing this. It is reasonable to say that I am not experiencing self-awareness right now since I am not in one of those moments, even though I am currently writing about it. But clearly I am acting and typing as though I have internal experience in this moment. Why could the same not be true for a machine, or an animal? Why could it not also have moments of self awareness?
It’s less “why isn’t it possible” and more “how do we even get to that point”. It’s not about “could it be done” and more “what does the path to arrive at the point where we’ve done it even look like”.
To me, a lot of GAI discussion feels like talking about the future of knee surgery before the invention of germ theory. I think there are thousands of “unknown unknowns” in methodology, thinking, reasoning, and theory on the journey to reach GAI, and it’ll probably take a couple of hundred years to shake all of those out, as an optimistic estimate.
But that is all of modern medicine; our technology is far, far below the level of the crazy future nanomachines that run our body. Experimentation runs ahead of theory and people only have foggy theories about why various drugs work, especially psychiatric medications. Given that an unthinking process (evolution) produced us, it is totally conceivable that GAI could be produced through experimentation following a gradient of abilities without understanding what consciousness is. I’m not a “GAI is imminent” person but the idea that figuring this stuff out is a precondition to producing consciousness is not true; actually I think the only route to figuring this stuff out is through producing GAI.
That’s my point. We’re people from the 1400s discussing the future of online banking. We’re stone age people talking about car mechanics and industrial resin production.
I mean, I don’t think we’ll have consciousness Totally Figured Out when/if we develop GAI, but there’s a long long road to where it is and I don’t think we can make any reasonable estimations other than “There are a lot of stepping stones to get there that totally change the surface of the pond”
My following comment feels massively off-topic now I’ve written it but: In Buddhist literature, “consciousness” (vijnana, also called discernment) is one of the five factors of clinging and always present, and one’s belief that they make up our “self” or “soul” is something that causes one’s suffering. There are multiple aspects of consciousness depending on school, i.e. “sense consciousness”, consciousness arising from the stimulation of one of the senses, or “mind consciousness”, consciousness arising from mental factors, the “thinking” consciousness. It is my understanding of Buddhist philosophy that consciousness is always there in some lifetime, whether you are clinging to it, aware of it, heedful of it or not. But it is different to “self-awareness”, which might be better described as “consciousness of consciousness” or such.
As noted in another comment, that is not what Buddhism claims. Mindfulness or meta-awareness (knowing that you know) might be rare states, but basic conscious experience is ever present. When you are thinking “I am not experiencing self-awareness right now”, you are aware. Without awareness there can be no thought, feeling, sense-impression etc.
Which is it? Are you merely acting as if you had internal experience? If so, yes clearly a machine can also act as if it had an internal experience (eg by outputting the string “I have an experience”). Or do you actually have internal experience? If so, can a machine have that? That’s a much more interesting but completely different question. (A lot of people disagree that it’s a different question, of course, but we won’t resolve that here.)
I’m pretty sure by this definition people don’t remember either, we simulate memory. Unless the precision and immutability of computers is the simulation of human memory which instead is a continuous reimagining of the past.
I’m inclined to go with your second statement, yeah! “Memory” was extended to computers from humans by analogy, after all.
lost me at the uncle bob quote. /s
Anyway, yes almost completely agree. This must explain the sense of deep dread I feel when trying to learn a new kubernetes concept that makes me want to close the page and reach for a drink.
I might be tarred and feathered for saying this, but I believe that Kubernetes is a perfect example of what “write dumb code” looks like in the pathological case.
If your code is not able to represent the inherent complexity of the problem domain, that complexity goes somewhere. Since k8s is written in a language that is arguably too simple for the problem, the complexity has moved to the configuration. A language with a more sophisticated type system could produce much simpler configuration.
Interesting! Could you expand a bit or give an example?
OO programming continues to be the dominant model of programming.
Class inheritance as the golden hammer has definitely receded, though.
What does OOP even mean if not “class inheritance”? It seems like OOP proponents have broadened the definition over time to include data-oriented and functional programming paradigms; I suspect the Java and C# that is written today, with its emphasis on functional concepts, would not have been considered “OOP” by practitioners in 2002. Moreover, what is the difference between “OOP with minimal class inheritance” and, say, idiomatic Go or Rust? It feels like “OOP” is too poorly defined to be useful (its definition expands and contracts depending on convenience), but maybe I’m mistaken here?
Inheritance, encapsulation, dynamic dispatch. OO languages started to incorporate more things, but those have remained central to them. Other languages have found other ways of implementing those concepts (traits, generics, unexported fields, etc.) so it blurred the lines.
So if you write Java without inheritance, or if you use inheritance only as a mechanism to automatically delegate methods to some member (as a convenience, like Go’s struct embedding syntax sugar), is that considered OOP or no? If yes, then would Go and Rust be considered OOP, since they also support encapsulation and dynamic dispatch? Or does OOP require more emphasis on inheritance to distinguish it from other paradigms?
That static classification is not a good way to think about it IMO. Most languages can express different behaviors and you can program in more than one style in them. The general way in which one programs in them tells us a lot more about the language.
Before ALGOL, languages didn’t really have functions/procedures, control flow, etc. Then languages like C and Pascal popularized them and became known as “procedural languages”. Now every single language has that, including OO languages, so are they all just procedural languages? I’d say no, because the style is just different and adds different ideas.
Languages evolve and absorb concepts in their own ways. Go and Rust have incorporated some of those OO ideas in their own way but it’s a different style. It’s hard to reduce most languages to a handful of concepts. Given enough time a concept that was once associated with “Functional” or “Object-Oriented” just become table stakes and everyone includes them.
Yes, I’d say inheritance would be the most defining characteristic, along with encapsulation and dynamic dispatch. Inheritance is mostly used to take advantage of the other two concepts; probably most people today recognize that the taxonomy part of inheritance is not really all that helpful.
I agree with this, and I probably should have taken more care to say something like “are idiomatic Go and Rust programs considered OOP”. I also agree that there’s some degree of orthogonality between these different paradigms, but if they’re completely orthogonal then the original claim that OOP is “the dominant model” seems meaningless.
Sure, but again (to your earlier point) I’m less interested in the features available in languages and more how programs are written. If a program doesn’t use (or makes only superficial use of) inheritance but does use encapsulation and dynamic dispatch, then can we credibly say it’s an OOP program? If so, then I would argue that much of what is considered “good, modern Java and C#” is not really OOP, and that the original claim that OOP is “the dominant model” is largely incorrect.
And conversely, if we allow for such inheritance-eschewing programs to be considered “OOP”, then suddenly a lot of idiomatic Go, Rust, Haskell, OCaml, etc programs become “OOP” insofar as they avail themselves of encapsulation and dynamic dispatch. At that point it seems like “OOP” loses a lot of its meaning and thus its conceptual utility.
I believe the exact term you’re looking for is “structured programming”. And all mainstream languages today are definitely structured.
Yes, the problem is exactly that OOP isn’t a coherent enough concept to be useful.
I think the best definition (hinted at elsewhere in the thread) is “mash together inheritance, encapsulation, polymorphism, and perhaps message-passing”, but nearly every discussion about how to build software would be improved if you stopped talking about OOP and talked instead about inheritance, encapsulation, polymorphism, and message-passing as distinct concepts that are useful in different places.
(Most discussions of OOP don’t mention message-passing at all, which is weird because the person who coined the term OOP actually meant it in a way that put messages first, but language changes with use and all, so I get it.)
Very well said; “OOP” as a term has always felt like it subtracts from, rather than adds to, conversational clarity.
OOP means modularity, instantiation, and “good”.
Modularity: The program is separated into various modules, each comprising the definition of a data structure, and functions and procedures using (or operating on) that data structure. For instance: a C compilation unit, an ML module, or a Java class.
Instantiation: Basically the rejection of global variables, such that you can have several instances of the same data structure. Very useful for instance when you want several parsers in your program.
“Good”: Because indeed the common denominator of OOP doesn’t include much more than the above. Which isn’t much at all, considering the above is also present in procedural and functional programming. The definition of the day will always add ever changing details, depending on what is considered “good” at the time. If it’s good, you can be sure OOP will eventually steal it. If it’s not, it will get kicked out of being a fundamental part of OOP eventually.
The point is to make sure “OOP” never dies. It will change its own meaning to adapt to modern practices, and then look back at its made-up longevity to be able to say it was right all along. Some things, though, are probably safe from ever changing: modularity and instantiation, for instance, are so universally useful that they’ll never be kicked out.
I stopped worrying about naming programming paradigms years ago. Just try stuff and keep what works.
It sounds like you’re implying that the term “OOP” is meaningless, which I very much think is true, but why not just come out and say it explicitly?
Err, yes it is. I’m just trying to be polite about it. :-)
Let’s ask the person who coined the term back in the ’60s: Alan Kay. He defined it as:
Of these, I think the only one that people would disagree with now is the last one. There are places for extreme late binding, but things like structural and algebraic type systems let you program as if you have extreme late binding but get the performance of early binding.
I mean, I’ve heard this before, but I’ve heard lots of OOP proponents reject that definition. When I was learning OOP back in the early 2000s, precisely none of the content I saw discussed message passing—it always emphasized classes and inheritance and architecting your software according to a “Kingdom of Nouns”. I’m fine with the definition, it just doesn’t match what I understand most other people to mean by the term.
There’s a lot of cargo-cult OO stuff. Some OO languages have feature X, therefore X must be the important bit! But most of it misses the point. Data hiding and loose coupling are the key parts. Inheritance is a bolted-on optional extra. Self had inheritance, but via (multiple) prototype chains and not as part of a type system. Languages like C++ and Java tried to improve performance by coupling implementation sharing with subtyping, which in hindsight was an absolute disaster.
Language follows usage. Just because one person used a term first doesn’t mean they get to define it forever. Kay’s definition is more useful than the popular one, but at this point things are so muddled that neither definition actually improves clear communication, and both should be abandoned.
Perhaps better to consider the viewpoints of those who actually invented it. Hint: it wasn’t Alan Kay. His viewpoint is entirely related to Smalltalk, which other than “objects all the way down” and meta-Lisp like concepts offered nothing new in terms of OO. Not objects, classes, functional polymorphism or inheritance. Not even object.method (dot) notation.
He coined the term ‘object oriented’ to describe the style of Lisp he wrote. He created Smalltalk to embody it.
Here’s how I explain OOP to people:
First and foremost, it’s about organization. The dominant – sometimes the sole – form of organization is the class – or in some cases the object, since some languages are not class-based. The organization is naturally around data structures, with behaviors tied to those data structures. A humorous look at this is the “kingdom of nouns” blog from years back: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html
With that organization style being natural and dominant, a major benefit is encapsulation. Instead of “code” being unorganized and free floating, it tends to be glued onto some relevant data structure (the combination is a class / object). Encapsulation allows for data hiding (usually a very good thing).
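A minimal illustration in Python (the Account class is hypothetical): the behavior is glued onto the data structure, and the data itself is hidden behind methods that enforce its invariants.

```python
class Account:
    """Behavior glued onto a data structure: the balance is hidden
    behind methods that enforce its invariants."""

    def __init__(self) -> None:
        self._balance = 0  # leading underscore: hidden by convention

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self) -> int:
        return self._balance

a = Account()
a.deposit(50)
print(a.balance)  # 50
```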
Composition takes many forms, which vary widely from language to language. Interface-based. “Pure virtual” classes. Traits and mixins. Aggregation. Delegation. But the most widely known form of composition is inheritance. The big question of “composition” in any programming language is: “How do I get exactly what I want, in a manner that will be obvious to the reader, without having to repeat myself?” And for class-based OO languages, that means “How do I reuse this ‘class’ thing as a whole?” Obviously, inheritance does exactly that. But so do traits and mixins. So do aggregates and type delegations.
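Three of those forms side by side in a Python sketch (the classes are made up): a mixin for layered reuse, aggregation for ownership, and delegation instead of inheritance.

```python
import json

class JsonMixin:
    # A mixin: a reusable slice of behavior layered onto any class.
    def to_json(self) -> str:
        public = {k: v for k, v in self.__dict__.items()
                  if not k.startswith("_")}
        return json.dumps(public)

class Engine:
    def start(self) -> str:
        return "vroom"

class Car(JsonMixin):
    def __init__(self) -> None:
        self.wheels = 4
        self._engine = Engine()  # aggregation: a Car has an Engine

    def start(self) -> str:
        return self._engine.start()  # delegation instead of inheritance

c = Car()
print(c.start())    # vroom
print(c.to_json())  # {"wheels": 4}
```

Car reuses Engine and JsonMixin wholesale without ever subclassing Engine.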
Virtual behavior (virtual function calls) is a natural side effect of some of these composition forms, because of the layering that occurs when classes are combined (inheritance, traits, mixins). Virtual behavior is extremely useful, but it does come at a cost. For example, in the performance realm, a virtual call costs around 29 clock cycles on x64 for Intel’s chips, plus a pointer chase (through the virtual function table) that impacts CPU cache usage. While this has no measurable negative impact on a typical business application, it can easily have a significant impact on a low-level library.
Like a lot of broad terms, the utility is not in the exact definition, but in the ability to provide general context. But I’m not sure what you’re expecting in terms of “useful” here. The term itself is just a means to communicate, among people who probably already know a bit about the topics being communicated.
Every time “OOP” comes up, the majority of the comments are arguments about what it actually means.
This is not a phenomenon that happens with useful, clear terminology. It’s time to move on and use better words.
Object-oriented code that is occasionally spiced with lambdas is something completely different from code in a language that allows only functions. The “emphasis” mostly means that people talk about newly added features rather than ones that have been present for decades. And it is mainly syntactic sugar for equivalent classes and patterns. See also my other comment.
For some things perhaps, for others not. It remains dominant for teaching and representing Abstract Data Types. It remains dominant for the creation of development and runtime frameworks for platforms. It remains dominant for modeling a large proportion of simulations.
However, it is no longer dominant (sadly, was it ever?) for representing a business problem domain model. This was the real intention of the inventors of OO, particularly Kristen Nygaard: the modeling of collaborating phenomena and concepts of a referent system. The so-called “Scandinavian school” viewpoint. Note that, for my purposes, merely using an OO language doesn’t equate to OO programming.
I think a great deal of confusion stemmed from conflating OO programming with OO modeling. The latter can be taken too far, but has obvious strong points. The former was an interesting idea that was quickly obfuscated. The best realization of (the best) OO ideas might be in Erlang, although they didn’t intend that, just as they didn’t intend to create a functional language.
From your comment, I am reminded of ER, which I found quite useful for designing and building applications: https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model
It’s not “SQL”. It’s not “OO”. But it’s enormously useful for both.
Edit: I’d also add Domain Driven Design (DDD) to this. Eric Evans, Jimmy Nilson, Vaughn Vernon, etc.
I’ve been running essentially this setup in production for 1.5 years, with a Broadway application [1] accepting messages and delegating most of the work to Gleam code. Gleam is what finally got me to commit to the BEAM, because a strong type system is a big productivity booster for me, while the Elixir ecosystem (especially LiveView) is just fantastic and delightful.
However, note that mix_gleam is a bit rough and doesn’t work well with the Gleam language server. You won’t get proper language-server feedback, and you’ll see false compiler warnings, mostly because Gleam can’t detect the transitive dependencies handled by mix. A fix for this is to manually add the transitive dependencies to the gleam.toml file, but that’s tedious. There is an open PR to semi-automate this [2].
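For illustration, the manual workaround looks roughly like this (package names and version constraints here are made up, not from the PR): each transitive dependency that mix already resolves gets repeated by hand in gleam.toml so the Gleam tooling can see it.

```toml
[dependencies]
gleam_stdlib = ">= 0.34.0 and < 2.0.0"
# Transitive dependencies that mix resolves, repeated by hand
# (tedious, and easy to let drift out of sync):
gleam_json = ">= 1.0.0 and < 2.0.0"
gleam_erlang = ">= 0.25.0 and < 1.0.0"
```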
Here’s another approach to using Gleam in Elixir projects, without mix_gleam: https://blog.nytsoi.net/2024/06/26/gleam-deps-in-elixir
It does seem to suffer from the same problem with transitive dependencies, however, according to recent discussion on the Gleam Discord server.
In short, the integration is not yet as tight as one might wish. A simpler approach is to set up a separate Gleam project and use it as a path dependency from Elixir.
In any case: Elixir + Gleam is heaven!
[1] elixir-broadway.org [2] https://github.com/gleam-lang/mix_gleam/pull/39
Curious if you tried https://devenv.sh/
Do you see any of the specific “Bad” or “Ugly” points described in the post being addressed by Devenv? (I’m a Devenv fan, for the record.)
My point is that you can manage software development using Nix on MacOS effectively.
Slapping configuration management on top of a pre-baked OS is, as Drake points out, a fight against a philosophy that Apple picked.
Hi, I haven’t, but it’s on my todo list. Just had no opportunity, since most of my dev environments fit in a devshell flake. Would be happy to try it out and share my experience, as I am also a big fan of your work.
Yes - contributed too! I was successful in using it to replace Docker Compose for starting services in one of our big projects (yay!) but I have been waiting for the ability to create MongoDB initial accounts before I introduced it to my team, and haven’t had time to revisit since that landed.
I spent over a decade in ecommerce product configuration and customization. The logic was gnarly for most projects. BREs were a siren song. Fortunately, many programmers had made the mistake of using them ahead of us. Hearing misfortunes like this one was sobering. They merely move the problem of complex logic to a kind of poor-man’s DSL that only one or two people know how to use. That said, I do still wonder sometimes if Prolog would have suited some forms of constraint logic like this better than SQL and mainstream, general purpose languages. But the track record for Prolog seems… limited.
The problem with Prolog IME is that it’s almost too powerful and “elegant”. Prolog has two very impressive features:
However, I’ve found that these two features sometimes make it hard to debug programs. The problem is that all Prolog programs that are syntactically valid are semantically valid as well—even if you attempt to do something “nonsensical”, like checking a list for equality against a number, because all constructs bottom out at relations.
Backtracking suffers from the same issue (although not as bad); it’s extremely powerful but it can be difficult to debug and easy to introduce by accident.
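Backtracking can be sketched outside Prolog too; here is a rough Python analogy using generators (the goal and names are made up), where each loop is a choice point and the caller “backtracks” by asking for the next solution.

```python
def pairs_summing_to(goal, xs):
    # Each nested loop is a choice point; yielding suspends the search,
    # and the caller "backtracks" by asking for the next solution.
    for x in xs:
        for y in xs:
            if x + y == goal:
                yield (x, y)

# Exhausting all choice points, like hitting ; at a Prolog prompt:
print(list(pairs_summing_to(10, [3, 5, 7])))  # [(3, 7), (5, 5), (7, 3)]
```

The surprise factor is similar: an overly general choice point quietly produces extra “solutions” you never intended.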
I think these two factors are some of the reason we don’t see more Prolog “in the large”. That said, Prolog is a wonderful language and I recommend everyone to learn it regardless!
(Michael Hendrix has a good video about running Prolog in production: https://youtu.be/G_eYTctGZw8)
Those are orthogonal. Homoiconicity has nothing to do with whether there is only one construct. Lisp was never like that.
Is there any dynamic language where comparing two values of different types is an error? Doing an equality check between a list and a number is perfectly fine in Python.
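That is easy to check: in Python, cross-type equality is defined (it simply answers False), while cross-type ordering raises at runtime.

```python
# Cross-type equality is defined in Python: it just answers False.
print([1, 2] == 3)   # False
print("3" == 3)      # False

# Cross-type ordering, by contrast, is a runtime TypeError:
try:
    [1, 2] < 3
except TypeError:
    print("ordering a list against an int is an error")
```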
Yeah, that seems like it’d be all kinds of weird in a language in which everything is just one construct.
Do you think there is scope for doing this kind of system with datalog rather than prolog and would that avoid any problems that you see in a prolog-based system?
For context, I have some interest in https://www.biscuitsec.org/ where authorisation logic is expressed via datalog programs. Wondering if you have any insight?
For small programs I think both Prolog and Datalog would work really well. For “programming in the large” I think something like Mercury sounds very promising, but it’s still a very small, niche language. Verse seems kind of similar, but I’m not familiar enough with it to make any judgments.
As I mentioned in another comment, I think Answer Set Programming is pretty much in the sweet spot for these use cases. It has none of the footguns of Prolog (including backtracking), behaves more like a Datalog but with quite a bit more power, has very natural semantics, and acts pretty much as a black-box oracle answering queries based on your rules.
(Shameless plug, I recently released an interview going pretty deep into real-world use of ASP: https://thesearch.space/episodes/6-adam-smith-on-answer-set-programming)
One of my ongoing goals is to embed Prolog into other languages to improve use cases like this (so far the fruits of my effort are a Trealla Prolog wasm port with Go and JS embeddings). Currently the most annoying part is converting data back and forth; I’d like to experiment with some fancier FFI functionality. The SWI-Prolog wasm port has a neat JS FFI interface that I may steal.
I think Prolog is unfamiliar enough that a lot of devs aren’t comfortable with writing an entire project in it, but I hope to improve that familiarity by easing into adoption through rules engine/scripting-type usages. Personally I have been enjoying it a lot. Being able to query not just your DB but the code itself is an awesome superpower :)
In terms of syntax and also implementation heritage, Erlang directly descends from Prolog and has been wildly successful in the domains it was intended to be used for.
My experience tells me differently. At my previous job, for the team I was on, we had an ever-evolving set of business logic while the rest of the system remained (largely) the same. Yet because we did not have a separate business_logic() function, testing wasn’t easy. The problem we had was a mixing of I/O and logic. It sucked, and it didn’t have to be that way.
I was going to reply to this article to tell an identical story.
You don’t need a business rules engine, but having some kind of forcing function to keep the “business rules” a pure function is invaluable.
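A sketch of that forcing function in Python (all names and rules here are hypothetical): keep the rules in a pure function and push I/O to the edges, so the rules can be tested with plain data.

```python
def business_logic(order_total: float, tier: str) -> float:
    """Pure function: the rules, with no I/O, trivially unit-testable."""
    if tier == "gold":
        return order_total * 0.90
    if tier == "silver":
        return order_total * 0.95
    return order_total

def handle_order(fetch_order, fetch_customer, order_id: str) -> float:
    # Impure shell: I/O at the edges, the pure rules in the middle.
    order = fetch_order(order_id)
    customer = fetch_customer(order["customer_id"])
    return business_logic(order["total"], customer["tier"])

# Testing needs no database, just plain dicts:
total = handle_order(lambda _: {"customer_id": "c1", "total": 200.0},
                     lambda _: {"tier": "silver"},
                     "order-42")
print(total)  # 190.0
```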
Could you have solved your software engineering problem by programming an enterprisey business rules engine in XML?
I doubt it—we had to run the business logic as part of a call flow (customer was the Oligarchic Cell Phone Company), so it was in C/C++.
Not sure if you were being snarky, but rules engines don’t need to be enterprisey or based on XML. There are several lightweight alternatives. (Though my preference would be something from the logic programming world, such as Answer Set Programming, which has much clearer semantics.)
Controlled side effects via algebraic effects or monads seems like a good halfway house.
Most of my Haskell work tends toward determining the set of primitives needed to write the code the way that makes the most sense, and then encoding the rules in that way.
It ends up being a module-scoped DSL.