What surprised me about Tainter’s analysis (and I haven’t read his entire book yet) is that he sees complexity as a method by which societies gain efficiency. This is very different from the way software developers talk about complexity (as ‘bloat’, ‘baggage’, ‘legacy’, ‘complication’), and it made his perspective seem particularly fresh.
I don’t mean to sound dismissive – Tainter’s works are very well documented, and he makes a lot of valid points – but it’s worth keeping in mind that grand models of history have made for extremely attractive pop history books, but really poor explanations of historical phenomena. Tainter’s Collapse of Complex Societies, while obviously built on a completely different theory (and one with far less odious consequences in the real world), is based on the same kind of scientific thinking that brought us dialectical materialism.
His explanation of the evolution and the eventual fall of the Roman Empire makes a number of valid points about the Empire’s economy and about some of the economic interests behind the Empire’s expansion, no doubt. However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.
Yes, some particular military expeditions were specifically motivated by the desire to get more grain or more cows. But many weren’t – in fact, some of the greatest Roman wars, like (some of) the Roman-Parthian wars, were not driven specifically by Roman desire to get more grains or cows. Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone). The increase in cost and decrease in efficiency, too, are not a matter of half-rational historical determinism – they had economic as well as cultural and social causes that rationalising things in terms of energy not only misses, but distorts to the point of uselessness. The breakup of the Empire was itself a very complex social, cultural and military story which really cannot be described simply in terms of the dissolution of a central authority.
That’s also where this mismatch between “bloat” and “features” originates. Describing program features simply in terms of complexity is a very reductionist model, which accounts only for the difficulty of writing and maintaining it, not for its usefulness, nor for the commercial environment in which it operates and the underlying market forces. Things are a lot more nuanced than “complexity = good at first, then bad”: critical features gradually become unneeded (see Xterm’s many emulation modes, for example), markets develop in different ways and company interests align with them differently (see Microsoft’s transition from selling operating systems and office programs to renting cloud servers) and so on.
However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.
Of course. I’m long past the age where I expect anyone to come up with a single, snappy explanation for hundreds of years of human history.
But all models are wrong, only some are useful. Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.
Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.
Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.
To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.
Like many scholars at the intersections of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.
In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like history masterfully explaining things, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.
Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to their usefulness.
Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone).
Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!
Tainter’s Collapse of Complex Societies, while obviously built on a completely different theory (and one with far less odious consequences in the real world), is based on the same kind of scientific thinking that brought us dialectical materialism.
Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.
Thought-provoking, grand philosophy has long been part of the discipline, but it became especially popular (some argue that it was Nathaniel Bacon who really brought forth the idea of predicting progress) during the Industrial Era, with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics, such as grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism had a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing its surroundings for the first time.
Modernism flourished in a lot of fields in the late 19th and early 20th centuries. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism and much of Hegel’s philosophy were steeped in this modernist tradition.
In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in a multitude of horrors. This experience was repeated by eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s incompleteness theorems showed us that there are statements whose validity we cannot evaluate. Moreover, many social sciences that bought into modernist ideas, like anthropology, history, and urban planning, were having trouble making progress that agreed with the grand modernist ideas that guided their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.
Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance in the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies gain hold again.
I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having any worth if they are predictive and testable, i.e. if they allow us to predict how the whole works and what it does based on movements of the pieces.
Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.
Unfortunately, this has proven to be, at best, beyond the limits of human understanding and comprehension. This is especially difficult to do in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its text are barely of historical interest now.
That’s not to say that developing a theory of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think more about things that they had never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production were involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.
This is true here as well (although I should add that, unlike most materialistic historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models, if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something that you’re tempted to think about when considering pre-industrial societies, for example.
These models don’t really have predictive value and they probably can’t ever gain one. But they do have an exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways than one, from more angles, and considering more factors, and possibly understand it better.
That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in economics: there are strong calls to stop attributing predictive value to macroeconomic models because, after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.
Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.
(OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.
Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could win a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably not be anywhere near as big, but it will require more work because you have to understand the existing system first.
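To make that concrete, here’s a rough sketch in Python (the names and numbers are made up, it’s not any particular system):

```python
import time

def expensive_lookup(key):
    # Stand-in for the original stateless system: same input, same output.
    time.sleep(0.1)  # simulate slow work
    return key.upper()

# Version 1: no state, nothing to invalidate, trivial to reason about.
def handle_request(key):
    return expensive_lookup(key)

# Version 2: faster on repeated keys, but now there is state to manage.
_cache = {}
_MAX_ENTRIES = 1024   # cache size policy
_TTL_SECONDS = 60     # cache expiry policy

def handle_request_cached(key):
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[1] < _TTL_SECONDS:
        return hit[0]  # fresh cache hit
    value = expensive_lookup(key)
    if len(_cache) >= _MAX_ENTRIES:
        # Crude eviction: drop the oldest entry to bound memory use.
        oldest = min(_cache, key=lambda k: _cache[k][1])
        del _cache[oldest]
    _cache[key] = (value, now)
    return value
```

The cached version is the one that arrives with all the new questions attached: what invalidates an entry, how big the cache may grow, what happens under concurrent access, and so on.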
In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.
China’s “socialism” revolves around state-owned enterprises in a market economy. They’re pretty capitalist; even Chinese schools teach the Chinese system of government as a Hegelian derivation of socialism and capitalism.
What distinguishes capitalism as a system is that profit is the decisive and ultimate factor around which economic activity is organized. China’s system makes use of markets and private enterprise, but it is ultimately planned and organized around social ends (see: the aforementioned poverty alleviation).
In China they describe their current system as the lower stage of socialism, but yes they’ve developed it in part based on insights into the contradictions of earlier socialist projects.
Another, less charitable, way of looking at it: the Chinese Government is unwilling to relinquish power, but discovered through the starvation and murder of 45 million of their own people that mixed economies are less bad than planned economies.
Yeah, I used to believe all that too. But eventually I got curious about what people on the other side of the argument could possibly have to say, and much to my surprise I found they had stronger arguments and a more serious commitment to truth. Then I realized that the people pushing those lines I believed were aligned with the people pushing all sorts of anti-human ideologies like degrowth.
“Government willing to relinquish power” is a sufficiently low-half-life, unstable-state-of-being that the average number in existence at any given time is zero. What information does referencing it add?
Ah, fair - I was referring to the politicians (theoretically) in charge of the civil service. I’m intrigued by where you’re going with this, though … are you concerned about the efficacy of changing the 0.1% even in the case of democratically elected Governments?
To my mind, long-term stability is the key practical advantage of constitutional democracies as a form of government.
Dictatorships change less frequently, and churn far more of the government when they do. Single-party rule is subject to sudden, massive policy reversals.
Stability (knowing how the rules can change over time, and how they can’t) is what makes them desirable places for the wealthy to live and invest, which makes larger capital works possible.
Right so to paraphrase - you don’t see the replacement of politicians by democratic means as likely to effect significant change, but also, you see that as a feature not a bug?
Essentially, yes. Significant changes would imply that the voters have drastically changed their minds in a short time, which essentially never happens. The set of changes is also restricted (eg no retrospective crimes, restrictions on asset seizure).
I’d encourage taking “Our World in Data” charts with a grain of salt when considering fossil fuel dependence (and our future), losses measured against the planetary boundaries framework (notably biodiversity), etc.
Hunter-gatherer societies also ran up against the limitations of their mode of relating to the environment. A paradigm shift in this relationship opened up new horizons for growth and development.
If we’ve reached similar environmental limits then the solution is a similar advancement to a higher mode, not “degrowth” (an ideology whose most severe ramifications will inevitably fall upon the people who are struggling the most already).
“10. Income and consumption does not tell us the whole story about poverty. Poverty is multi-dimensional, and some aspects of human well-being can be obscured by consumption figures.”
“11. The present rate of poverty reduction is too slow for us to end $1.90/day poverty by 2030, or $7.40/day poverty in our lifetimes. To achieve this goal, we would need to change economic policy to make it fairer for the world’s majority. We will also need to respond to the growing crisis of climate change and ecological breakdown, which threatens the gains we have made.”
“12. Ultimately, the more morally relevant metric is not proportions or absolute numbers, but rather the extent of poverty vis-a-vis our capacity to end it. By this metric, the world has much to do—perhaps more than ever before.”
Why would anybody fund somebody else’s vanity project when they could use the money to fund their own vanity project?
Because you find value in it. The same reason people pay subscriptions to Netflix or their favorite YouTuber, or subscribe to the Patreons of game modders or anyone else.
The world does not owe anybody a living. Be thankful for having the resources to spend time on a vanity project.
Where does this sentiment come from? I didn’t read anything about anyone owing anyone anything in the linked post.
Can you define “vanity project” here? It seems you are making a value judgment, the phrase implies that such projects have little value aside from stroking one’s ego. I wonder what has value, in your eyes.
Are you saying that because computer languages already exist, there is no value to having new languages?
Do humans already communicate perfectly with computers? Do computers perfectly meet humanity’s needs? Are computer programs free of bugs and vulnerabilities? Are all programs fast and efficient, user-friendly, and easy+quick to develop properly? Is there no room for improvements over existing languages that might help address these issues?
A major way to have a software project create a steady income flow is to get companies on board (they’re much less cost-sensitive than individual users), but pulling the rug out from under their feet is a sure way to make sure that this won’t happen.
So for elm specifically, I think “vanity project” is an apt description.
Agreed, and “getting companies on board” doesn’t necessarily mean compromising design decisions like he describes. If people are willing to invest in your alternative language that means that they largely agree with your design principles and values. But it does mean providing the kinds of affordances and guarantees that allow an organization to be in control of their own destiny and engineer a robust and maintainable system. Elm has had almost no energy invested into these concerns.
I see nothing wrong with a project whose purpose is enjoyment, that includes some amount of stroking of ego.
Finding out which language features have the greatest amount of some desirable characteristic requires running experiments. I’m all for running experiments to see what is best (however best might be defined).
Creating a new language and claiming it has this, that or the other desirable characteristics, when there is no evidence to back up the claims, is proof by ego and bluster (this is a reply to skyfaller’s question, not a statement about the linked to post; there may be other posts that make claims about Elm).
How would a person establish any evidence regarding a new language without first designing and creating that new language? I agree that evidence for claims is desirable, but your original comment seems to declare all new language design to be vanity (i.e. only good for ego-stroking), and that’s a position that requires evidence as well. Just because a language has not yet proven its value does not mean it has no value. Reserving judgment until you can see some results seems a more prudent tactic than, well, prejudice.
How do you work out which features are best if the ones you’re trying don’t exist yet? Wouldn’t that require designing and implementing them and then let people use them?
To be able to design/implement a language feature that does not yet exist, somebody needs to review all existing languages to build a catalogue of existing features, or consult such a catalogue if one already exists.
I don’t know of the existence of such a catalogue, pointers welcome.
Do you know of any language designer who did much more than using their existing knowledge of languages?
You wouldn’t have to know all existing language features to invent a new approach, and the only way to test a new approach would be to build it and let people use it.
I think I’m lost as to where your argument is headed.
Why would anybody fund somebody else’s vanity project when they could use the money to fund their own vanity project?
Because they realise that there’s greater benefit in them having the other project with increased investment than in their own project. The invisible hand directs them to the most efficient use of resources.
Because they realise an absolute advantage the other project has in producing a useful outcome, and choose to benefit from that advantage.
Because they are altruists who see someone doing something interesting and decide to chip in.
Nice trick with that Enum.sort_by/2. I used a tuple in my solution so I wouldn’t need to handle item, nil as a separate case. I thought about using a pattern match in the closure, but I felt that a separate helper function would be clearer.
You should care about privacy because privacy isn’t secrecy. I know what you do in the toilet, but that doesn’t mean you don’t want to close the door when you go in the stall.
I wonder if he has decided that writing a better low level programming language might be a more significant undertaking than he thought, especially if he hopes to primarily program video games…
Who knows. In the episode, Jonathan Blow doesn’t appear to indicate that he sees Rust solving his specific problems. I appreciated all the lamentations and the insights and the ranting. After 20 years I feel every pain point they ranted on.
I agree with a lot of the points he makes, but testing is the fly in the ointment. It’s much harder to test a 200-line function than a couple of smaller functions.
I use this style all the time for batch processing glue code that’s not easy to unit test anyway. It makes sense to limit the scope of variables as much as possible. I regularly promote variables to higher levels of scope than what I initially predicted when they’re heavily used. It’s cleaner, and easier to refactor than threading unrelated state values in and out of multiple functions with awkwardly constructed structs or tuples.
He’s not talking about pure functions, where a granular separation of functionality improves testability, but rather cases where the program coordinates many stateful calls. Unit tests of functions split out from that kind of procedure don’t actually tell you much about the correctness of the program and generally become dead-weight change-detectors.
I agree that change-detector tests are worthless. I guess if there are no pieces that can be separated out as pure functions then yes, inlining makes a lot more sense.
First of all, I would argue that you can’t do engineering without understanding your company’s business. A software engineer has to balance lots of different factors when building a system, but the one factor that cannot be compromised is the amount of time and/or money that your organization can afford to spend on a given system in order to be sustainable. I agree this understanding is important, but it has very little to do with marketing.
Secondly, there is a kind of marketing which is just finding a way to inform potential customers about your product and explaining how it could help them. I think you’d be hard-pressed to find anyone who thinks this is evil. Then there is a whole other class of activity also called marketing which is varying degrees of manipulative, dishonest, and ineffectual make-work (see: most of the ad-tech industry). I think you’d be hard-pressed to argue that these activities aren’t evil without resorting to nihilism.
It has a grandiose claim and tries to attach itself to a well-respected coding standard, but it smells like a post-hoc justification for the unpalatable state of the code.
The code looks like a state machine. And a state machine can be written either as a spaghetti code of ifs, with omg-space-shuttle-will-crash-if-you-forget-an-else fear, or as a table with state transitions, which by construction ensures everything is accounted for.
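For illustration, here’s a minimal table-driven sketch in Python (the states and events are invented, just to show the shape):

```python
# Hypothetical states and events, purely to illustrate the table-driven style.
TRANSITIONS = {
    ("idle",    "start"):  "running",
    ("running", "pause"):  "paused",
    ("paused",  "start"):  "running",
    ("running", "finish"): "done",
}

def step(state, event):
    # Every (state, event) pair is either in the table or rejected here,
    # so there is no way to "forget an else" for an unhandled case.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on event {event!r}")
```

step("idle", "start") returns "running"; anything not in the table fails loudly instead of silently falling through an if/else ladder.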
I have the feeling that most developers (even those with a CS degree) have forgotten about this. And on the other side, there’s also a big component of missing education on the topic. I’m not sure how many working programmers have been instructed on, or have invested time in learning about, state machines.
That said, it is definitely a good mentoring/training topic. I think it will be well received by my team, and in any case should start circulating the knowledge more. Does anyone have good resources on this?
I read this and thought, well, could you unit test this code to ensure correctness? I know unit testing threading behavior is tough, but if this is space shuttle levels of risk, might that effort be worth it?
Nope. Not buying it. This is cheesy schtick covering up some very questionable coding practices.
I am a late stage beginning programmer struggling towards journeyman, and even I must ask “Why not AT LEAST use methods to collapse some of these 10 level deep conditional nests?”.
Good software engineering practice strives to keep code easy to reason about and thus more readable and maintainable. As much as we all love to be entertained by seeing HERE BE DRAGONS in source code, nobody actually thinks this is a GOOD idea.
This is an invitation to deviate from normalcy, and I can’t see any good at all coming out of it.
I think the received wisdom about small functions and methods has gotten somewhat muddled. The small functions style has become an aesthetic preference (which I adopted and still observe in myself) that is applied arbitrarily without any objective understanding of its effects.
For things that are actually functions in the mathematical sense (i.e., pure functions) a granular separation of functionality simplifies testing and composition. But procedures that mutate state or coordinate multiple stateful systems are not testable and composable in the same way. In this context, the small functions/methods style is actually an obstacle to understanding the system and ensuring correctness.
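A rough sketch of the contrast, with hypothetical names throughout:

```python
# A pure function: easy to split out and test with plain inputs and outputs.
def discounted_total(prices, discount):
    return sum(prices) * (1 - discount)

assert discounted_total([10, 20], 0.5) == 15.0  # no setup, no mocks

# A procedure coordinating several stateful systems (made-up interfaces).
# Splitting each call into its own tiny method and unit-testing it with mocks
# mostly verifies "the mock was called", not that the ordering and error
# handling below are actually correct.
def place_order(db, payment_gateway, mailer, order):
    db.reserve_stock(order.items)              # mutates inventory
    charge = payment_gateway.charge(order.total)
    if not charge.ok:
        db.release_stock(order.items)          # the ordering of these calls matters
        raise RuntimeError("payment failed")
    db.record_order(order, charge.id)
    mailer.send_confirmation(order.email)
```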
I think the received wisdom about small functions and methods has gotten somewhat muddled. The small functions style has become an aesthetic preference (which I adopted and still observe in myself) that is applied arbitrarily without any objective understanding of its effects.
Again, I am but an acolyte, but from my super simplistic perspective, having 8 levels of conditional nesting makes code SUPER hard to reason about, and when you break that into methods that signal the intent of the code contained within you increase readability.
I guess I’d thought of that as beyond argument. I’ll read the Carmack article, thanks for the link.
Yeah that’s definitely the accepted dogma but I’ve observed the opposite in large systems I’ve worked on (although it took a while for me to see it). If you look at game engines, which do some of the most complex coordination of stateful systems anywhere, you will see the large procedure/nested conditional style. This doesn’t come from an ignorance of the ability to make small methods.
The intent communicated by factoring code into small methods is that these calls can be rearranged and reused at will, but for stateful calls this most often isn’t true.
I can also imagine that in game engines simply eating the overhead induced by a method call (stack, heap, etc.) could be problematic.
Lesson for me here is that there are almost no hard and fast rules where code is concerned, but I still think that for the class of very non computationally intensive process control (Devops) work I do, having small, readable, clearly named methods rather than giant nesting structures is a best practice I’ll stand by.
Multithreading usually requires a bit more programming work to distribute tasks properly, but hey, this is Tesla we’re talking about — it’s probably a piece of cake for the company.
I don’t think that thread says anything about the expertise of the team that would have to implement multithreaded code, or anything about the overall level of development expertise at Tesla, really. If you’ve worked in software for a while, you should have plenty of stories like that yourself. (If you don’t, I contend you’ve been unusually lucky with your choice of employers.)
There is a somewhat qualitative difference between a phone switch crashing and a car going through a school zone that is suddenly unable to steer or brake, and crashes.
I don’t mind reaping child processes in my programs, but I’d prefer my sedan not duplicate my behavior.
If you mean the part where the computer runs a bunch of nasty heuristics to convert camera pictures and radar scans into second-by-second actions, don’t systems like TensorFlow normally use SIMD or the GPU for parallelism rather than threads, to avoid the overhead of cache coherency and context switching? When your tolerance for latency is that low, you do not use Erlang.
If you mean the part where you use map and traffic data to do your overall route, I don’t think you need to be that fast. You’re spending most of your time waiting on the database and network, and could probably use Erlang just fine. The important part is the fast self-driving heuristics system cannot block on the slower mapping system. The driving system needs to send out a request for data, and keep driving even if the mapping system doesn’t respond right away.
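To illustrate the shape of that (a toy Python sketch, not a claim about how any real autopilot stack works):

```python
import queue
import threading
import time

map_requests = queue.Queue()
map_replies = queue.Queue()

def mapping_service():
    # The slow subsystem: takes its time answering route queries.
    while True:
        position = map_requests.get()
        time.sleep(0.5)  # pretend this is a slow map/route lookup
        map_replies.put(f"route data near {position}")

threading.Thread(target=mapping_service, daemon=True).start()

route_hint = None
for tick in range(10):  # the fast control loop
    if tick % 3 == 0:
        map_requests.put(tick)  # fire-and-forget request to the slow system
    try:
        route_hint = map_replies.get_nowait()  # pick up newer data if it has arrived
    except queue.Empty:
        pass  # otherwise keep driving with whatever we already have
    print(f"tick {tick}: steering with {route_hint!r}")
    time.sleep(0.1)
```

The fast loop never waits on the mapping service; it just uses the freshest answer it has.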
I was being facetious, really. You wouldn’t run BEAM on the AP computer; it’s not meant for that kind of data crunching.
It is my understanding that the MCU is the middleman between the AP computer and the car’s hardware – this is how it also applies firmware updates to various parts of the car.
So I would write the AP control plane in Erlang/Elixir for extremely reliable low latency message passing. I expect the MCU is receiving values from the AP hardware which it acts upon by sending the signals to the car’s hardware. This also means it’s extremely unlikely to crash the process from bad data coming from the AP computer.
This is a guess based on what I’ve seen inside the MCU, but haven’t bothered digging too deep.
I’m also confused about why you think Erlang is not low latency?
The language that’s designed for safe multithreading and high performance is Rust. BEAM languages wouldn’t provide acceptable performance for this use-case.
The languages used in successful projects in the safety-critical field had no formal spec. That’s mostly C and assembly, with some Ada, C++, and Java. So, Rust would probably be an improvement unless it was a group throwing every verification tool they can at their C code. C has the most such tools.
To be fair Ada has a pretty decent specification and SPARK/Ada probably has the most usable verification tools for industrial usage today, as long as you want specifications that are more expressive than what your type-system can capture. The Rust system may be very good at catching ownership-related mistakes, but there still currently exists no automated tools to verify that, say, a function that claims to be sorting data actually returns a sorted result.
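To make the sorting example concrete, here’s a Python sketch of the kind of contract meant; note that this only checks the property at runtime for one input, whereas SPARK-style tooling can prove such a postcondition statically for all inputs:

```python
from collections import Counter

def my_sort(xs):
    # The implementation under scrutiny (here it just delegates to sorted()).
    return sorted(xs)

def check_sort_contract(xs):
    ys = my_sort(xs)
    # Postcondition 1: the output is ordered.
    assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    # Postcondition 2: the output is a permutation of the input.
    assert Counter(ys) == Counter(xs)
    return ys

check_sort_contract([3, 1, 2, 2])
```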
You’re right in that Ada/SPARK can get further in correctness. Most in safety-critical systems use C subsets with no formal methods, though. There’s lots of review, lots of testing, and recently more use of automated analyzers.
Even so, Ada still has Rust beat on that given there’s more tooling for analyzing and testing it. C has even more than Ada.
No, but I wonder how this works in relation to JVM / BEAM. Is the formal spec really about the specific language or is the behavior of the VM sufficient? I’m not aware of different JVM or BEAM languages being able to do things that are impossible in Java/Erlang.
Need more info, but it’s interesting to think about.
Summary: author’s expectations of a young language exceed the actual implementation, so they write a Medium article.
If you can’t tell: slightly triggering article for me, and I don’t use/advocate for Elm. I’d much prefer if the author either pitched in and helped, or shrugged and moved on to something else. Somehow, yelling into the void about it is worse to me, I think because there are one or two good points in there sandwiched between non-constructive criticisms.
The article provides valuable information for people considering using Elm in production. The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.
I read “fastest” and “safest” as referring to “how fast I can get work done” and “is this language a safe bet”, not fast and safe in the sense of performance. If that’s the right interpretation, then those conclusions flow naturally from the observations he makes in the article.
Right, the author made the same clarification to me on Twitter, so that’s definitely what he meant. In that sense, the conclusion is fine. Those are very ambiguous words though (I took them to mean “fastest runtime performance” and “least amount of runtime errors”).
TBF, I was a little too snarky in my take. I don’t want to shut down legitimate criticism.
The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.
That ambiguity is a problem. There’s also a chicken/egg problem with regard to marketing when discussing whether something is production ready. I’m not sure what the answer is.
It’s even more ambiguous for Elm. There are dozens of 100K+ line commercial code bases out there. How many should there be before the language is “production ready”? Clearly, for all those companies, it already is.
Perhaps the question is misguided and has reached “no true Scotsman” territory.
That’s one reason why this topic is touchy to me: things are never ready until the Medium-esque blogosphere spontaneously decides it is ready, and then, without a single ounce of discontinuity, everyone pretends like they’ve always loved Elm, and they’re excited to pitch in and put forth the blood, sweat, and tears necessary to make a healthy, growing ecosystem. Social coding, indeed.
In a sense, everyone wants to bet on a winner, be early, and still bet with the crowd. You can’t have all those things.
I like your last paragraph. When I think about it, I try to reach the same impossible balance when choosing technologies.
I even wrote a similar post about Cordova once (“is it good? is it bad?”). Hopefully it was a bit more considered as I’d used it for 4 years before posting.
The thing that bothers me with the developer crowd is somewhat different, I think. It’s the attempt to mix the other two unmixable things. On one hand, there’s the consumerist attitude to choosing technologies (“Does it work for me right now? Is it better, faster, cheaper than the other options?”). On the other hand, there are demands for all the benefits of open source like total transparency, merging your PR, and getting your favourite features implemented. Would anyone demand this of proprietary software vendors?
I’m not even on the core Elm team, I’m only involved in popularising Elm and expanding the ecosystem a bit, but even for me this attitude is starting to get a bit annoying. I imagine it’s worse for the core team.
Hey, thanks for your work on Elm. I’m much less involved than you, but even I find the “walled garden” complaints a little irritating. I mean, if you don’t like this walled garden, there are plenty of haphazard dumping grounds out there to play in, and even more barren desert. Nobody’s forcing anybody to use Elm! For what it’s worth, I think Evan and the Elm core team are doing great work. I’m looking forward to Elm 1.0, and I hope they take their time and really nail it.
The author of this article isn’t pretending to be an authority on readiness, and claiming that they’ll bandwagon is unwarranted. This article is from someone who was burned by Elm and is sharing their pain in the hopes that other people don’t get in over their heads.
Being tribal, vilifying the “Medium-esque blogosphere” for acts that the author didn’t even commit, and undermining their legitimate criticisms with “well, some people sure do love to complain!” is harmful.
I’d like to push back on this. What is “production ready”, exactly? Like I said in another comment, there are dozens of 100K+ line commercial Elm code bases out there. Clearly, for all those companies, it already is.
I’ve used a lot of other technologies in production which could easily be considered “not production ready”: CoffeeScript, Cordova, jQuery Mobile, Mapbox. The list goes on. They all had shortcomings, and sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.
The point is, it either works in your particular situation, or it doesn’t. The question is meaningless.
Here are my somewhat disjoint thoughts on the topic before the coffee has had a chance to kick in.
What is “production ready”, exactly?
At a minimum, the language shouldn’t make major changes between releases that require libraries and codebases to be reworked. If it’s not at a point where it can guarantee such a thing, then it should state that fact up front. Instead, its creator and its community heavily promote it as being the best thing since sliced bread (“a delightful language for reliable webapps”) without any mention of the problems described in this post. New folks take this to be true and start investing time into the language, often quite a lot of time since the time span between releases is so large. By the time a new release comes out and changes major parts of the language, some of those people will have invested so much time and effort into the language that the notion of upgrading (100K+ line codebases, as you put it) becomes downright depressing. Not to mention that most of those large codebases will have dependencies that themselves will need upgrading or, in some cases, will have to be deprecated (as elm-community has done for most of my libraries with the release of 0.19, for example).
By promoting the language without mentioning how unstable it really is, I think you are all doing it a disservice. Something that should be perceived as good, like a new release that improves the language, ends up being perceived as a bad thing by a large number of the community and so they leave with a bad taste in their mouth – OP made a blog post about it, but I would bet the vast majority of people just leave silently. You rarely see this effect in communities surrounding other young programming languages and I would posit that it’s exactly because of how they market themselves compared to Elm.
Of course, in some cases it can’t be helped. Some folks are incentivized to keep promoting the language. For instance, you have written a book titled “Practical Elm”, so you are incentivized to promote the language as such. The more new people who are interested in the language, the more potential buyers you have or the more famous you become. I believe your motivation for writing that book was pure, and no one’s going to get rich off of a book on Elm. But my point is that you are more bought into the language than others normally are.
sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.
That is the very definition of not-production-ready, isn’t it?
Disclaimer: I quit Elm around the release of 0.18 (or was it 0.17??) due to a distaste for Evan’s leadership style. I wrote a lot of Elm code (1, 2, 3, 4 and others) and put some of it in production. The latter was a mistake and I regret having put that burden on my team at the time.
From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.
FWIW, I would also prefer more transparency about Elm development. I had to scramble to update my book when Elm 0.19 came out. However, not for a second am I going to believe that I’m entitled to transparency, or that it was somehow promised to me.
To your other point about marketing, if people are making decisions about putting Elm into production based on its tagline, well… that’s just bizarre. For example, I remember looking at React Native in its early stages, and I don’t recall any extensive disclaimers about its capabilities or lack thereof. It was my responsibility to do that research - again, because limitations for one project are a complete non-issue for another project. There’s just no one-size-fits-all.
Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?
That is the very definition of not-production-ready, isn’t it?
Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.
From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.
And that’s great! The issue is the things that cannot be upgraded. Let’s take elm-combine (or parser-combinators as it was renamed to), for example. If you depended on the library in 0.18 then, barring the invention of AGI, there’s no automated tool that can help you upgrade: your code will have to be rewritten to use a different library, because elm-combine cannot be ported to 0.19 (not strictly true, because it can be ported, but only by the core team; my point still stands because it won’t be). Language churn causes ecosystem churn which, in turn, causes pain for application developers, so I don’t think it’s a surprise that folks get angry and leave the community when this happens, given that they may not have had any prior warning before they invested their time and effort.
Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?
I don’t think it’s an exaggeration to call a language with breaking changes between releases unstable. To be completely honest, I can’t think of a better word to use in this case. Fluctuating? In flux? Under development? Subject to change? All of those fit and are basically synonymous to “unstable”. None of them are highlighted anywhere the language markets itself, nor by its proponents. I’m not making a judgement on the quality of the language when I say this. I’m making a judgement on how likely it is to be a good choice in a production environment, which brings me to…
Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.
They were not good choices, because, by your own admission, you were unable to meet your requirements by using them. Hence, they were not production-ready. Had you been able to meet your requirements and then been forced to make changes to keep up with them, then that would also mean they were not production-ready. From this we have a pretty good definition: production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is. Being forced to spend time to keep up with changes to the language and its ecosystem is “having a bad time” in my book.
I understand that our line of work essentially entails us constantly fighting entropy and that, as things progress, it becomes harder and harder for them to maintain backwards compatibility, but that doesn’t mean that nothing means anything anymore or that we can’t reason about the likelihood that something is going to bite us in the butt later on. From a business perspective, the more likely something is to change after you use it, the larger the risk it poses. The more risks you take on, the more likely you are to fail.
I think your definition is totally unworkable. You’re claiming that technologies used in thousands upon thousands of projects were not production ready. Good luck with finding anything production ready then!
I’ve been working with Clojure for almost a decade now, and I’ve never had to rewrite a line of my code in production when upgrading to newer versions because Cognitect takes backwards compatibility seriously. I worked with Java for about a decade before that, and it’s exact same story. There are plenty of languages that provide a stable foundation that’s not going to keep changing from under you.
I am stating that being able to put something in production is different from said thing being production ready. You claim that there is no such thing as “production ready” because you can deploy anything which is a reduction to absurdity of the situation. Putting something into production and being successful with it does not necessarily make it production ready. It’s how repeatable that success is that does.
It doesn’t look like we’re going to get anywhere past this point so I’m going to leave it at that. Thank you for engaging and discussing this with me!
Thank you as well. As I said in another comment, this is the first time I tried having an extended discussion in the comments in here, and it hasn’t been very useful. Somehow we all end up talking past each other. It’s unfortunate. In a weird way, maybe it’s because we can’t interrupt each other mid-sentence and go “Hang on, but what about?…”. I don’t know.
This doesn’t respond to bogdan’s definition in good faith.
production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is.
In response to your criticisms, bogdan proposed a scale of production-readiness. This means that there is no hard distinction between “production-ready” and “not production-ready”. Elm is lower on this scale than most advocates imply, and the article in question provides supporting evidence for Elm being fairly low on this scale.
Frankly, I don’t really want to have a discussion with you. I’m calling you out because you were responding in bad faith. You didn’t address any of his actual points, and you dismissed his argument condescendingly. The one point you did address is one that wasn’t made, and wasn’t even consistent with bogdan’s stance.
I disagree that the question is meaningless just because it has a subjective aspect to it. A technology stack is a long-term investment, and it’s important to have an idea how volatile it’s going to be. For example, changes like the removal of the ability to do interop with Js even in your own projects clearly came as a surprise to a lot of users. To me, a language being production ready means that it’s at the point where things have mostly settled down, and there won’t be frequent breaking changes going forward.
By this definition, Python wasn’t production ready long after the release of Python 3. What is “frequent” for breaking changes? For some people it’s 3 months, for others it’s 10 years. It’s not a practical criterion.
Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!
(Also, many people have a different perspective about the interop issue; it wasn’t a surprise. I don’t want to rehash all that though.)
Python wasn’t production ready long after the release of Python 3.
Python 3 was indeed not production-ready by many people’s standards (including mine and the core team’s based on the changes made around 3.2 and 3.3) after its release up until about version 3.4.
Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!
“it’s improving too slowly” is not the same as “it’s changing too slowly”.
By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”. But if we’re going to write off a hugely popular language like that, we might as well write off the whole tech industry (granted, on many days that’s exactly how I feel).
Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.
By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”.
Python 2 and 3 became different languages at the split as evidenced by the fact that they were developed in parallel. Python 2 was production ready. Python 3 was not. The fact that we’re using numbers to qualify which language we’re talking about proves my point.
It took five years for Django to get ported to Python 3. (1, 2)
Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.
You’re hung up on the wording here, and “frequent” is not as important to Yogthos’ argument as “breaking changes” is.
I think most people agree that Python 3 was quite problematic. Your whole argument seems to be that just because other languages have problems, you should just accept random breaking changes as a fact of life. I strongly disagree with that.
The changes around ecosystem access are a HUGE breaking change. Basically any company that invested in Elm and was doing Js interop is now in a really bad position. They either have to stay on 0.18, re-implement everything they’re using in Elm, or move to a different stack.
Again, as I noted there is subjectivity involved here. My standards for what constitutes something being production ready are different than yours apparently. That’s fine, but the information the article provides is precisely what I’d want to know about when making a decision of whether I’d want to invest into a particular piece of technology or not.
I don’t think you are really aware of the changes to Elm because you’re seriously overstating how bad they were (“re-implement everything” was never the case).
I agree that there is useful information in the article – in fact, I try to read critical articles first and foremost when choosing technologies so it’s useful to have them. I never said that we should accept “random breaking changes” either (and it isn’t fair to apply that to Elm).
I still don’t see that you have a working definition of “production ready” – your definition seems to consist of a set with a single occupant (Clojure).
As an aside, this is the first time I’ve had an extended discussion in the comments here on Lobsters, and it hasn’t been very useful. These things somehow always end up looking like everyone’s defending their entrenched position. I don’t even have an entrenched position – and I suspect you may not either. Yet here we are.
Perhaps I misunderstand the situation here. If a company has an Elm project in production that uses Js interop, what is the upgrade path to 0.19? Would you not have to rewrite any libraries from the NPM ecosystem in Elm?
I worked with Java for around a decade before Clojure, and it’s always been rock solid. The biggest change that’s happened was the introduction of modules in Java 9. I think that’s a pretty good track record. Erlang is another great example of a stack that’s rock solid, and I can name plenty of others. Frankly, it really surprises me how cavalier some developer communities are regarding breaking changes and regressions.
Forum discussions are always tricky because we tend to use the same words, but we assign different meanings to them in our heads. A lot of the discussion tends to be around figuring out what each person understands when they say something.
In this case it sounds like we have different expectations for what to expect from production ready technology. I’m used to working with technologies where regressions are rare, and this necessarily colors my expectations. My views on technology adoption are likely more conservative than those of the majority of developers.
Prior to the 0.19 release, there was a way to directly call JS functions from Elm by relying on a purely internal mechanism. Naturally, some people started doing this, despite repeated warnings that they really shouldn’t. It wasn’t widespread, to my knowledge.
All the way back in 2017, a full 17 months before the 0.19 release, it was announced that this mechanism would be removed. It was announced again 5 months before the release.
Of course, a few people got upset and, instead of finding a migration path, complained everywhere they could. I think one guy wrote a whole UI framework based on the hack, so predictably he stomped out of the community.
There is an actual JS interop mechanism in Elm called ports. Anybody who used this in 0.18 (as they should have) could continue using it unchanged in 0.19. You can use ports to integrate the vast majority of JS libraries with Elm. There is no need to rewrite all JavaScript in Elm. However, ports are asynchronous and require marshalling data, which is why some people chose to use the internal shortcut (aka hack) instead.
So, if a company was using ports to interop with JS, there would be no change with 0.19. If it was using the hack, it would have to rewrite that portion of the code to use ports, or custom elements or whatever – but the rework would be limited to bindings, not whole JS libraries.
There were a few other breaking changes, like removing custom operators. However, Elm has a tool called elm-upgrade which helps to identify these and automatically update code where possible.
There were also fairly significant changes to the standard library, but I don’t think they were any more onerous than some of the Rails releases, for example.
Now, regarding your “rock solid” examples, by which I think you mean no breaking changes: if that’s achievable, that’s good – I’m all for it. However, as a counterexample, I’ll bring up C++, which tied itself into knots by never breaking backward compatibility. It’s a mess.
I place less value on backward compatibility than you do. I generally think that backward compatibility ultimately brings software projects down. Therefore, de-prioritising it is a safer bet for ensuring the longevity of the technology.
Is it possible that there are technologies which start out on such a solid foundation that they don’t get bogged down? Perhaps – you bring up Clojure and Erlang. I think Elm’s core team is also trying to find that kind of foundation.
But whether Elm is still building up towards maturity or its core team simply has a different philosophy regarding backward compatibility, I think it’s at least very clear that that’s how it is if you spend any time researching it. So my view is that anybody who complains about it now has failed to do their research before putting it into production.
I feel like you’re glossing over the changes from native modules to using ports. For example, native modules allowed exposing external functions as Tasks, so they could be composed. Creating Tasks also allows for making synchronous calls that return a Task Never a, which is obviously useful.
On the other hand, ports can’t be composed like Tasks, and as you note can’t be used to call synchronous code which is quite the limitation in my opinion. If you’re working with a math library then having to convert the API to async pub/sub calls is just a mess even if it is technically possible to do.
To sum up, people weren’t using native modules because they were completely irresponsible and looking to shoot themselves in the foot, as you seem to be implying. Being able to easily leverage the existing ecosystem obviously saves development time, so it’s not exactly surprising that people started using native modules. Once you have a big project in production, it’s not trivial to go and rewrite all your interop in 5 months, because you have actual business requirements to work on. I’ve certainly never been in a situation where I could just stop all development and go refactor my code for as long as I wanted.
This is precisely the kind of thing I mean when I talk about languages being production ready. How much time can I expect to spend chasing changes in the language as opposed to solving business problems? The more breaking changes there are, the bigger the cost to the business.
I’m also really struggling to follow your argument regarding things like Rails or C++ to be honest. I don’t see these as justifying unreliable tools, but rather as examples of languages with high maintenance overhead. These are technologies that I would not personally work with.
I strongly disagree with the notion that backwards compatibility is something that is not desirable in tooling that’s meant to be used in production, and I’ve certainly never seen it bring any software projects down. I have however seen plenty of projects being brought down by brittle tooling and regressions.
I view such tools as being high risk because you end up spending time chasing changes in the tooling as opposed to solving business problems. I think that there needs to be a very strong justification for using these kinds of tools over ones that are stable.
The question isn’t even close to meaningless… Classifying something as “production ready” means that it is either stable enough to rely on, or is easily swapped out in the event of breakage or deprecation. The article does a good enough job of covering aspects of elm that preclude it from satisfying those conditions, and it rightly warns people who may have been swept up by the hype around elm.
Elm has poor interop, and is (intentionally) a distinct ecosystem from JS. This means that if Elm removes features you use, you’re screwed. So, for a technology like Elm (which is a replacement for JS rather than an enhancement) to be “production ready” it has to have a very high degree of stability, or at least long term support for deprecated features. Elm clearly doesn’t have this, which is fine, but early adopters should be warned of the risks and drawbacks in great detail.
Let’s keep it really simple: to me, ‘production-ready’ is when the project version gets bumped to 1.0+. This is a pretty established norm in the software industry and usually a pretty good rule of thumb to judge by. In fact Elm packages enforce semantic versioning, so if you extrapolate that to Elm itself you inevitably come to the conclusion that it hasn’t reached production readiness yet.
The term “production ready” is itself not at all clear. Some Elm projects are doing just fine in production and have been for years now. Some others flounder or fail. Like many things, it’s a good fit for some devs and some projects, and not for some others – sometimes for reasons that have little to do with the language or its ecosystem per se. In my (quite enjoyable!) experience with Elm, both official and unofficial marketing/docs/advocates have been pretty clear on that; but developers who can’t or won’t perceive nuance and make their own assessments for their own needs are likely to be frustrated, and not just with Elm.
I agree that there’s valuable information in this article. I just wish it were a bit less FUDdy and had more technical detail.
I think there’s an angle to Elm’s marketing that justifies these kinds of responses: Those “author’s expectations” are very much encouraged by the way the Elm team presents their language.
Which criticisms do you find unfair, which are the good points?
I think there’s an angle to Elm’s marketing that justifies these kinds of responses
I’m sympathetic to both Elm and the author here. I understand Elm’s marketing stance because they ask devs to give up freely mixing pure/impure code everywhere in their codebase on top of a new language and ecosystem. (In general, OSS’s perceived need for marketing is pretty out of hand at this point and a bit antithetical to what attracts me to it in the first place). OTOH it shouldn’t be possible to cause a runtime error in the way the author described, so that’s a problem. I’d have wanted to see more technical details on how that occurred, because it sounded like something that type safety should have protected him from.
Fair criticisms:
Centralized ecosystem (though this is by design right now as I understand)
Centralized package repo
Official docs out of date and incomplete
Unfair criticisms:
PRs being open after 2 years: one example alone is not compelling
Tutorials being out of date: unfortunate, but the “Cambrian explosion” meme from JS-land was an implicit acknowledgement that bitrot was okay as long as it was fueled by megacorps shiny new OSS libs, so this point is incongruous to me (even if he agrees with me on this)
“Less-popular thing isn’t popular, therefore it’s not as good”: I understand this but also get triggered by this; if you want safe, established platforms that have a big ecosystem then a pre-1.0 language is probably not the place to be investing time
The conclusion gets a little too emotional for my taste.
Thanks for the detailed reply; the criticism of the article seems valid.
(As a minor point, the “PRs being open” criticism didn’t strike me as unsubstantiated because I’ve had enough similar experiences myself, but I can see how the article doesn’t argue that well. Certainly I’ve felt that it would be more honest/helpful for elm to not accept github issues/prs, or put a heavy disclaimer there that they’re unlikely to react promptly, and usually prefer to fix things their own way eventually.)
A lot of the things listed in the article are things that were explicitly done to make contributions harder. The development of Elm has explicitly made choices that make things harder, and not in a merely incidental way.
This isn’t “the language is young” (well, except for the debug point); a lot of this is “the language’s values go against things useful for people deploying to production”.
I don’t know, other than the point about the inability to write native modules and the longstanding open PR’s, all of the rest of the issues very much seem symptomatic of a young language.
The native module point sounds very concerning, but I don’t think I understand enough about elm or the ecosystem to know how concerning it is.
I’ve been vaguely following along with Elm, and the thing that makes me err on the side of agreeing with this article is that the native module thing used to not be the case! It was removed! There was a semi-elegant way to handle interactions with existing code and it was removed.
There are “reasons”, but as someone who has a couple of ugly hacks keeping a hybrid frontend + backend stack running nicely, I believe having those kinds of tricks is essential for bringing it into existing code bases. So seeing it get removed is a bit of a red flag for me.
I never relied on native modules, so I didn’t really miss them. But we now have ports, which I think is a much more principled (and interesting) solution. I felt that they worked pretty well for my own JS interop needs.
Stepping back a bit, if you require the ability to do ugly hacks, Elm is probably not the right tool for the job. There are plenty of other options out there! I don’t expect Elm to be the best choice for every web front-end, but I do appreciate its thoughtful and coherent design. I’m happy to trade backward compatibility for that.
If you spend any amount of time in the Elm community you will find that contributions to the core projects are implicitly and explicitly discouraged in lots of different ways. Even criticisms of the core language and paradigms or core team decisions are heavily moderated on the official forums and subreddit.
Also how are we using the term “young”? In terms of calendar years and attention Elm is roughly on par with a language like Elixir. It’s probably younger in terms of developer time invested, but again this is a direct result of turning away eager contributors.
I think it’s fine for Elm to be a small project not intended for general production usage, but Evan and the core team have continually failed to communicate that intent.
I guess by now it’s useless to complain about how confusing it is that OCaml has two (three?) “standard” package managers; the ecosystem around the language is kind of infamous for having at least two of everything. I trust the community will eventually settle on the one that works the best. At least it looks like esy is compatible with opam libraries (though the reverse is not true), so it might have a good chance against opam.
Also this is kind of unrelated, but I’m really salty about ReasonML recommending JS’s camelCase over OCaml’s snake_case. This is one of the few rifts in the ecosystem that can’t really be fixed with time, and now every library that wants to play well with both OCaml and Reason/BS ecosystems will have to export an interface in snake_case and one in camelCase.
I second the choice to use JS’s camelCase for ReasonML as a salty/trigger point. It seems like a minor syntactic thing to make it more familiar for JS developers making the switch, but as someone who primarily writes Haskell for my day job, camelCase is just less readable, IMO. Something I’m constantly irritated I even have to think about is casing acronyms consistently – which is avoided by snake_case or spinal-case – i.e. runAWSCommand or runAwsCommand, setHTMLElement vs setHtmlElement, versus run_aws_command, set_html_element, etc.
The strangest thing for me is the “hey, there are two mostly compatible syntaxes for this language we call ReasonML”, but it’s mostly the same thing as BuckleScript, from which we use the compiler anyway, except this, and this, and … oh, and by the way, it’s all OCaml inside. What?!
It’s not “better.” Yes, there are some cases where they’ve patched up some syntactic oddities in OCaml, but it’s mostly just change for the sake of being near JS.
Is it worth it? Depends. ReasonML and its team believe that OCaml failed to catch on because of syntax. If you agree, then yes, it’s worth it. And based on the meteoric rise I’ve seen of ReasonML, they may be right. That said, I believe, and I’m in good company here, that OCaml didn’t catch on because it had two of everything, had really wonky package managers (and again, two of them), and still lacks a good multithreading story. In that case, no, the syntax is just change for no reason, and the only reason ReasonML is successful is because Facebook is involved.
I’m all for functional alternatives displacing JavaScript, but my main frustration with ReasonML is that any niceties you gain from using it are outweighed by the fact that it’s just one more layer on top of an already complex, crufty, and idiosyncratic dev environment. I think that’s what’s holding OCaml back as much as anything else.
Some people seem to think that OCaml’s syntax is really ugly (I quite like it) and unreadable. I’m guessing they’re the same people who complain about lisps having too many parentheses.
ReasonML does fix a few pain points with OCaml’s syntax, mostly related to semicolons (here, here, here), and supports JSX, but it also introduces some confusion with function call, variant constructor and tuple syntax (here, here, here), so it’s not really a net win IMO.
I think ReasonML was more of a rebranding effort than a solution to actual problems, and honestly it’s not even that bad if you disregard the casing. Dune picks up ReasonML files completely transparently so you can have a project with some files in ReasonML syntax and the rest in OCaml syntax. The only net negative part is the casing.
Esy and bsb are build orchestration tools, not package managers.
Esy is not OCaml-specific, it can e.g. include C++ projects as build dependencies. This is how Revery ( https://github.com/revery-ui/revery ) is being developed, for example. Esy also solves the problem of having to set up switches and pins for every project, with commensurate redundant rebuilds of everything. Instead, it maintains a warm build cache across all your projects.
Bsb specifically supports BuckleScript and lets it use npm packages. It effectively opens up the npm ecosystem to BuckleScript developers, something other OCaml tools don’t do (at least not yet).
Having ‘two of everything’ is usually a sign of a growing community, so it’s something I’m personally happy to see.
Re: casing, sure it’s a little annoying but if C/C++ developers can survive mixed-case codebases, hey, so can we.
“Semantic” and “HTML” don’t belong in the same sentence. HTML is presentational markup – it describes things like headings and emphases and tables – and it was never really designed to carry meaning.
In a way it is disappointing that XSLT never took off, because then we could have served meaningful data through XML (which, for all its evils, is very easy to define, standardise and validate against schema definitions) and transformed it into something pretty for humans using XSLT, and then we wouldn’t have to worry so much about whether a11y devices or search engines can make sense of it.
Headings and emphases and tables describe semantic relationships. I’m not sure there are any presentational tags left in HTML5. Even <b> and <i> were redefined in terms of semantic usage.
In a way it is disappointing that XSLT never took off,
I actually worked on an old IE app that was all in on XSL and XSLT, server and client side. XSLT is an abomination. It works great for simple stuff. Start adding namespaces and versions to the schema and it falls apart completely. It has to do with having to match input namespaces in your XSLT for whatever XML input you’re given, IIRC. I recall we had to add a step to all our inputs to strip namespaces off tags.
If I’d understood XSLT better, I’d have made the DSL generate XSLT - the ruby-in-ruby DSL was a performance bottleneck we didn’t need.
The main problem I was trying to solve was ‘how do I encode several thousand similar rules, many of which are not yet known’. That’s a problem where the answer is basically always “create a new language”.
And since when does having multiple repos imply using git submodules to handle them? In my experience, proper packaging and semantic versioning is what makes it easy to work with multiple repositories.
Of course that comes with additional bureaucracy, but it also fosters better separation of software components.
Sure, the mono-repository approach allows for a fast “single source of truth” lookup, but it comes with a high price as soon as people realize that they can also cut corners. Eventually it becomes a pile of spaghetti.
(For the record, just in case you could not tell, I’ve got a strong bias towards the multi-repo, due to everyday mono-repository frustration.)
The flip side is that with multi-repo you amplify the Conway’s law mechanism where people tend to introduce new functionality in the lowest-friction way possible. If it’s easier to do it all in one project, that’s what will happen, even if it would be more appropriate to split the additions across multiple projects.
Introducing friction into your process won’t magically improve the skills and judgement of your team.
I once proposed an alternative to git-subtree that splits commits between projects at commit-time: http://www.mos6581.org/git_subtree_alternative. This should help with handling tightly-coupled repositories, but requires client changes.
Each side of this debate classifies the other as zealous extremists (as only developers can!), but both of them miss the crux of the matter: Git and its accompanying ecosystem are not yet fit for the task of developing modern cloud-native applications.
So, let me get this straight: Because your source control system doesn’t have innate knowledge of the linkages between your software components, that means it’s not up to the task of developing modern “cloud native” (God that term makes me want to cringe) applications?
I think not. Git is an ugly duckling, its UX is horrible but the arguments the author makes are awfully weak.
IMO expecting your VCS to manage dependencies is a recipe for disaster. Use a language that understands some kind of module and manage your dependencies there using the infrastructure that language provides.
Well said. I dislike Git too, but for a different reason – the source code is somewhat of a mess:
a hodgepodge of C, Python, Perl and Shell scripts
Git is the perfect project for a rewrite in a modern language like Go, Rust, Nim or Julia. A single Git binary similar to Fossil would make adoption and deployment much better.
Arguments are indeed weak – as in, what is “cloud native”? However, I think he’s onto something – maybe the problem is not just Git, but everything around it as well? I mean, one could create a big giant monorepo in Git, but the rest of the tooling (CI especially) will still do the full checkout and won’t understand that there are different components. Monorepos make a lot of sense; however, it seems to me that we’re trying to use popular tools to tackle a problem they are not meant to solve (that is, Git being a full replacement for SVN/SVK/Perforce and handling monorepos).
I don’t personally think monorepos make a lot of sense, and I think multi-repos are the way to go. If each separate piece is its own project and you let the language’s packaging / dependency management system handle the rest, I don’t see the problem.
Examples I can think of where my point applies are Python, Ruby, Perl or Java. Unless maybe you’re using a language with no notion of packages and dependencies - C/C++ perhaps? I don’t see the issue.
The friction in coordinating branches and PRs across multiple repos has been an issue on every team I’ve worked on. Converting to a monorepo has been a massive improvement every time I’ve done it. Either you’ve used hugely different processes or you’ve never tried using a monorepo.
The friction in coordinating branches and PRs across multiple repos
That’s a symptom that the project is not split across the correct boundaries. This is not different from the monolith-vs-services issue.
Amazon is a good example of splitting a complex architecture. Each team runs one or very few services each with their repos. Services have versioned APIs and PRs across teams are not needed.
If you have a mature enough project such that every repo has a team and every team can stay in its own fiefdom then I imagine you don’t experience these issues as much.
But even so, the task of establishing and maintaining a coherent split between repos over the lifetime of a project is non-trivial in most cases. The multi-repo paradigm increases the friction of trying new arrangements and therefore any choices will tend to calcify, regardless of how good they are.
I’m speaking from the perspective of working on small to mid-sized teams, but large engineering organizations (like Amazon, although I don’t know about them specifically) are the ones who seem to gain the most benefit from monorepos. Uber’s recent SubmitQueue paper has a brief discussion of this with references.
That’s interesting. Every team I’ve ever worked on had its architecture segmented into services such that cross branches and PRs weren’t an issue since each service was kept separate.
The advantage of a monorepo is that a package can see all the packages depending on it. That means you can test with all users and even fix them in a single atomic commit.
The alternative in a large organisation is that you have release versions and you have to support/maintain older versions for quite some time because someone is still using them. Users bear the integration effort whenever they update. In a monorepo this integration effort can be shifted to the developer who changes the interface.
I don’t see how you could do continuous integration in a larger organization with multiple-repos. Continuous integration makes the company adapt faster (more agile with a lowercase a).
Even if you use a language that has good (or some) package support, breaking a project into packages is not always easy. Do it too soon, and it will be at the wrong abstraction boundary and get in the way of refactoring; to correct it you’ll have to either lose history or deal with importing/exporting, which ain’t fun.
But if all your packages/components are in a single repo, you still might get the boundaries wrong, but the source control won’t get in the way of fixing it nearly as much.
100% on the surrounding tooling. CI tooling being based around Git means that a lot of it is super inflexible. We’ve ended up splitting repos just to get CI to do what we need it to do, and adding friction in surrounding processes.
A rethink of the ecosystem would be very interesting
What you would do is, instead of having that data being diverted to third-party servers that you have no control over, you would either set up your own server or pay for a service by a trusted third party to store that data yourself.
Peak silicon valley capitalism: dying because the doctors couldn’t access very important info about you because the server with that info was turned off because you didn’t pay for the hosting.
The stuff in a typical doctor’s office is not great, but I’d still take it over the average blockchain solution-in-search-of-a-problem any day of the week. The fundamental properties of a blockchain are the opposite of what you want for medical data. Blockchains have everything public and immutable by default and design. Medical data is private by law and must support corrections and errata. In fact, properly handling medical data often requires that you implement a time machine and be able to change history, then replay the new timeline forward.
Here’s an example: suppose there’s some ongoing treatment that requires documentation before claims on it can be paid, and the documentation doesn’t come in until after the first 4 claims. The first 4 claims would have been rejected, and now you have to rewind time, then replay those 4 claims and pay them.
Or say there’s a plan with a deductible: the first $500 of costs in the year are the patient’s responsibility, then the plan pays all claims after that. But a claim for something that happened early in the year doesn’t come in until later, after you think the deductible has been met. On many plans – including some of the US government-backed ones – you now have to start over, rewind time to the start of the year, and replay all the claims in chronological order, processing things according to what the deductible situation would have been if the claims had arrived in that order, and pull refunds from doctors you weren’t supposed to pay, order refunds to the patient from doctors who should have been paid by you, and reconcile the whole thing until the correct entities have paid the correct bills.
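To make the shape of that concrete, here’s a toy sketch of the replay step. The names and the flat “patient pays the first $500, plan pays the rest” rule are simplifications I made up; real adjudication is far messier:

```typescript
// Toy sketch of replaying claims against a deductible after a late-arriving
// claim changes the timeline. Names and the flat deductible rule are
// illustrative only.

interface Claim {
  serviceDate: string; // ISO date of when the care actually happened
  amount: number;      // billed amount in dollars
}

interface Adjudication {
  claim: Claim;
  patientPays: number;
  planPays: number;
}

function replayClaims(claims: Claim[], deductible = 500): Adjudication[] {
  // Rewind: order by when the care happened, not by when the claim arrived.
  const ordered = [...claims].sort((a, b) =>
    a.serviceDate.localeCompare(b.serviceDate)
  );

  let remainingDeductible = deductible;
  return ordered.map((claim) => {
    const patientPays = Math.min(claim.amount, remainingDeductible);
    remainingDeductible -= patientPays;
    return { claim, patientPays, planPays: claim.amount - patientPays };
  });
}

// Diffing the new adjudications against what was already paid out tells you
// which refunds to pull from doctors and which payments still need to go out.
```

The arithmetic is the easy part; the pain is reconciling the new timeline against everything that was already paid.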
An append-only structure is fundamentally terrible at this unless you build a whole bunch of specialized stuff on top of it to treat later entries as addending, modifying or replacing earlier ones. And since at that point you’ve gone and built a mutable history structure on top of your immutable blockchain, why didn’t you just build the mutable history software in the first place and skip the blockchain? You’re not using it for any of the unique things it does.
And that’s just the technical/bureaucratic part of the problem. The social side of the problem is even worse. For example: sometimes it is incredibly important that a patient be able to scrub data out of their medical history, because that data is wrong and will influence or even prejudice doctors who see the patient in the future. Doctors who just ignore obvious symptoms and write down in the notes “it’s all in their head, refer to a psychiatrist” are depressingly common, and every future doctor will see those notes. When it turns out that doctor was wrong and there was a real problem, you do not want to have to fight with the next doctor who says “well, it’s here in your file that this was found to be psychosomatic”. You have to get that fixed, and it’s already hard enough to do without people introducing uncorrectable-by-design medical records (and no, merely putting a big “that doctor was wrong” addendum in the medical blockchain is not a real solution to this).
Compared to how much worse it could get with blockchain, the crappy hairballs of only-run-on-Windows-XP (or worse) software in a typical doctor’s office are downright pleasant.
this is the sort of thing I heard from Americans who work in health care when they reviewed the article ahead of time, yeah.
The big problem they flagged was data silos - lots of patient data trapped in systems that don’t talk to each other, and the ridiculous difficulty and expense of extracting your health record from your doctor (though passing your stuff to another doctor is apparently fine). You can see the blockchain pitch in there - “control your own data!” … not that it can offer a solution in practice.
though passing your stuff to another doctor is apparently fine
It absolutely is not, at least technically, unless both doctors happen to use the same EMR, in which case it’s merely painful; or, if you’re extremely lucky, the same instance of the same EMR (for instance, half the health care in eastern Massachusetts uses Mass General’s EMR), in which case the experience is basically reasonable. Otherwise, you end up with some of the most absurd bullshit imaginable, that makes mailing paper charts seem reasonable in comparison; the best I’ve heard is a mailed CD containing proprietary viewing software in order to send imaging.
Interestingly, while “patients should own their own data” is a nice pitch, it’s actually somewhat problematic in practice. Health care providers may need to share information about a patient that patient would object to or should be kept unaware of (for instance, if a patient has been violent towards providers in the past, that information absolutely must be conveyed to any future providers that see them); and, like all professionals, health care providers use a lot of jargon in order to communicate clearly and precisely, which tends to make the chart incomprehensible to laypeople.
In the US, HIPAA provides a right to your medical records, similar (but not identical) to what a European would be familiar with from the GDPR. The gist of it is that you can make a request to any medical provider who’s treated you, and they have 30 days from the time of the request to provide you with a copy of your records. There are some exceptions (the most common exception is therapists’ notes), but not many.
I would guess that a lot of people probably don’t know they have this right, and probably a lot of medical providers aren’t forthcoming about making sure patients really understand their rights (they have to provide a notice of their privacy-related policies in writing, but a written notice in legalese is not the same as genuine understanding). A bigger problem is just that most people aren’t really able to look at medical records in their “standard” form and understand what they’re seeing.
And like the other commenter points out, interoperability between medical providers is not great. HIPAA allows medical providers to share information for treatment purposes, though, and the rules produce results that sometimes seem odd to modern tech people (for example, in the US the medical industry relies heavily on fax for sharing documents, because it’s often both the technically and legally simplest way to do so).
Maybe I’m missing something, but the examples you give are related to health insurance, not medical records per se – those are two different concerns that are related, but the latter can exist without the former. Medical records are immutable if they store facts, even wrong diagnoses – after all, how do you figure out that some diagnosis is wrong? By someone else claiming otherwise and providing supporting evidence. Further, medical records are not a single blob of information; they are more like tiny databases, for which we can have various ACLs for various pieces of information – IBM did quite a lot of work in that direction, IIRC.
Nevertheless, blockchain is not the right tool, at least not for this domain.
But that depends on the definition of what a medical record is, no? In socialist countries with universal healthcare, there is no such thing as a claim that should be reimbursed or a plan with a deductible. However, what is universal across the board is the state of body and mind, that is, all diagnoses and prescribed medications.
From this comment by @ubernostrum further up the chain:
The social side of the problem is even worse. For example: sometimes it is incredibly important that a patient be able to scrub data out of their medical history, because that data is wrong and will influence or even prejudice doctors who see the patient in the future.
This applies even without the baroque details of the US health insurance system. And even in countries with universal coverage, you still need to look out for fraud, fraudulent prescription of drugs, etc. The money comes from somewhere and it shouldn’t be wasted.
Here in Finland “universal” claims for things like medical pensions (whatever it’s called, disability retirement) are routinely denied. It’s tough, because people do try to abuse the shit out of it, but sometimes proper claims get denied. The processes for countering these claims are long and costly.
We also have systems within the same public health-care district that don’t talk to each other. The private franchises have handled that better, by asking for permission to share data, because it gives a better customer experience.
This is fortunately changing, but the data is now within a single point of failure, also duplicated in part for every relevant franchise.
Getting your data into the unified system incurs a cost. I don’t know if you can opt out of it, but you probably don’t want to, as the cost is not high, I think insurances cover it (transfer of wealth style) and it’s more convenient to check the records online than papers in a binder somewhere.
That is, for me, the key point. I have had a close relative get the wrong treatment for years because a doctor hastily put in an incorrect diagnosis and everyone after that just assumed it was correct.
Why did it take so long to have it edited out of her records? Because one symptom of that diagnosis is denying it. Once that diagnosis is in your records, whatever you say, the next doctor will just put in a note saying, “patient does not think she is suffering from X”.
So as far as I’m concerned, mutability of medical records is absolutely crucial. (Of course with a detailed log of operations visible only on court order or something.)
Blockchains are indeed append-only logs, albeit ones constructed in an interesting way.
And yet within a blockchain-based system state changes are made over time (Bitcoin balances change, CryptoKittes get new owners) by parsing the data contained within those logs.
In a medical system this means that records are indeed mutable/scrubbable. Want to fix a record? Post an update to the system’s blockchain. The record is the result of parsing the logs, so this updates the record. If you want a scrubbable log that’s also doable, although it does affect trust in the system in ways that take more thinking through than just “but GDPR!!!”.
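Concretely, the “records are derived by parsing the log” idea amounts to something like this toy sketch (the entry shapes and field names are made up, not any particular chain’s data model):

```typescript
// Toy sketch of "mutable" records on top of an append-only log: the record
// you read is whatever you get by folding over the entries in order. Real
// systems would also need signatures, access control, consensus, etc.

type LogEntry =
  | { kind: "set"; recordId: string; field: string; value: string }
  | { kind: "scrub"; recordId: string; field: string };

function currentRecord(log: LogEntry[], recordId: string): Map<string, string> {
  const record = new Map<string, string>();
  for (const entry of log) {
    if (entry.recordId !== recordId) continue;
    if (entry.kind === "set") {
      record.set(entry.field, entry.value); // later entries override earlier ones
    } else {
      record.delete(entry.field); // a "scrub" removes the field from the derived view
    }
  }
  return record;
}

// The catch: the scrubbed value is still sitting in the underlying log, which
// is exactly the trust/GDPR question being argued about above.
```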
All that said, like the OP I’m very wary of “control your data” pitches of all kinds. Don’t get me started on data-ownership UBI. ;-)
The analogy does work insofar as technical debt (depending on its location) requires you to make more hacks to work around it which can spread throughout the system and eventually become unsustainable. You can think of the time pressure as representing the increase in effort required to ship a feature.
The equivalent of a one block piece would be a software component with no interrelationships or dependencies. If you can build a useful product all out of components like that then good on you.
I’m currently in the middle of rewriting a bunch of Elixir FP-ish code from a “functional” approach to look more OOPy, because the maintenance burden is just beyond stupid.
Everybody talks about how magical functional programming is, but nobody seems to be really speaking up yet about functional programming as she is practiced:
“functional” programming means, programs with loads and loads of functions! yay!
“functional” programming with pattern matching, where if enough of something matches, hey, we can run it through the function! no need for types! (looking at you, Elixir maps)
“functional” programming where clever tricks with apply and map and reduce and filter and lambdas/closures substitute for clean, documented methods
“functional” programming where instead of fixing an interface and automatically getting to see where to clean up your objects, you chase through the entire pipeline of functions where the objects get shoved and look for where you’ll have to expand/contract things
“functional” programming where people just grab functions off of unrelated domain modules and use them because they need a one-off display function
“functional” programming where other people decide to duplicate the same function in multiple places because they never had to think about the domain
For all of its very many excesses, it’s so much easier to partition and expand on an OOP codebase than this “functional” nonsense. To wit:
“Young developer, go forth and create a new widget object, conforming to the widget interface. You can extend BaseWidget and just update the display method and price estimating function. Serialization is handled by that class already.”
Compare with:
“Young developer, we’re adding a new widget type. You’ll need to learn about the overloaded functions in the cost module, in the display module, and update all the tests accordingly. You’ll also need to either use the serialization library, or you’ll need to manually add those overrides to the code. Oh, also, make sure you don’t set the market_price in your arguments, or something else will match on that and you’ll need to refactor that too, or maybe it’ll just fail silently.”
OOP has so many problems, but I find it so much easier to get a sense of the size of a system and to page in and page out the parts I need into my working memory than to trace spaghetti for hours. I’m sick of people touting FP as a silver bullet when most of the shooters are cross-eyed.
I’ve long wondered when we’d figure out how to write FP so poorly we undo all of its supposed benefits. The same thing happened to OOP.
Unfortunately, there are very few computing paradigms out there for all the Medium thinkpieces that need to be written about how X paradigm is the worst.
Maybe programming isn’t the answer. Maybe we should just roll the clock back and have desks upon desks of people pushing paper around. It’d give more jobs for people that are not qualified to program, and reduce homelessness.
Of course, that’d make it harder to concentrate wealth in the hands of founders and execs and investors, but since I’m not seeing much of that, fuck’em.
Almost all of those bullet points are arguments in favor of static typing. Not arguments in favor of object oriented programming.
I’m speaking as someone who is terrible enough at programming to have written spaghetti code in both styles. It’s spaghetti code. Of course it sucks, and you don’t realize how much it sucks until several months later when you have to re-remember your terrible “design” choices, and, worse, have to extend it in a way that you did not plan for.
Static typing helps a great deal, but being able to define classes of objects and duplicate and tweak them is enormously useful–vulgar OOP isn’t totally wrong. Being able to communicate via messages, dispatch messages, inspect messages, store and replay messages–theoretical (Kay-esque) OOP also has a lot to offer.
I agree on the spaghetti code, but again will point how exasperating it is to see people celebrating FP as a cureall when most of the common devs seemingly can’t be trusted to scale it beyond a few tutorials. At least with OOP stuff you get a coloring book and if the devs are smart enough not to eat the crayons they can produce something that looks like the source material.
In other words, functional isn’t a programming paradigm. It isn’t a template for designing your system; it’s just a synonym for “computation without mutation and without backtracking*.” Which happens to contradict vulgar OOP, which has mutation, but that doesn’t change the fact that the proper counterpart to OOP would probably be something like Reactive Programming or Data-Oriented Design.
What has convinced you that the team that made your list of mistakes when attempting to follow the FP paradigm will do any better when attempting to follow the OOP paradigm?
Most programming problems are people problems. You’re only going to get truly good code if you commit to paying for it, up to and including formal code reviews wherein people can say “no, you can’t do that.”
Also: unityped functional programming is a special type of Hell. Learned that from my one (small) Clojure project and will never do it again.
This perfectly sums up the nagging feeling I’ve been having about FP, but haven’t been able to express. I have only ever heard people laud FP as a better paradigm, but I always felt that it would be so difficult to maintain the projects I see at scale in the real world, and I see how difficult OOP is… I do not envy the hacks that are in place for large FP codebases. I can grok an OOP codebase (hell, I can grok disassembly), but the moment someone gives me FP code I spend more time trying to keep the entire thing in my head. To this day I don’t think I’ve been able to understand a single decently sized FP codebase.
There’s benefits and costs to both paradigms but the idea that either is “unusable” or “doesn’t scale” is genuinely laughable. The fact that you don’t use it, so you can’t figure out how to use it, isn’t actually any kind of argument for anything. Typed functional programming has been around since 1973, C was invented in 1972. People both less and more talented than either of us have used both OOP and FP, strongly and dynamically typed for decades at both small and gargantuan scales.
I said that entirely from my personal perspective and meant no offense; your reaction and tone are exactly the reason that I tend to avoid even discussing my hesitation about programming paradigms. This could be (and most likely is) an entire failing on my end, but I have actively tried to learn and read large projects from both, and friendlysock’s experience mirrored mine. I wasn’t trying to slander, just to mention that I personally struggle massively with the paradigm, more than with most everything else in computer science. I can’t use it. I can’t understand how it scales to large teams.
In my experience it’s harder to teach an OOP programmer FP than a beginner. Part of this I believe has to do with how learning new paradigms is inherently humbling. The things that made you feel smart before now make you feel dumb. Dynamic typing for example makes me think, “Well I don’t know how anyone makes anything useful with this.”. In reality though the reason I feel this way is almost certainly because I’m missing pieces of knowledge of how to use languages like that effectively.
I think you could use it with practice. I think you could learn how to scale it to large teams. It’s okay that you don’t want to do or learn how to do either of those things but I wouldn’t internalize that decision as an inability. The people who are using functional programming at scale almost certainly aren’t smarter than you, just more patient and willing to feel dumb. Or more likely they didn’t happen to deep dive into an entire other paradigm where their previous intuitions are less useful.
While your post may have hurt my feelings to an extent and I’m sure that leaked out in my tone, my overarching goal was to dispel illusions of inadequacy.
I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff but I still think the outrage is overblown. It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line. In the short term the average user gets better compatibility which seems like a win overall even if the diversity proponents are a little upset.
I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff
If it’s an organization, you should always look at their incentives to know whether they have a high likelihood of going bad. Google was a for-profit company aiming for an IPO. Their model was collecting info on people (aka a surveillance company). These are all incentives for them to do shady stuff. Even if they want Don’t Be Evil, the owners typically lose a lot of control over whether they do that after they IPO. That’s because boards and shareholders that want numbers to go up are in control. After IPOs, decent companies start becoming more evil most of the time, since evil is required to always make specific numbers go up or down. Bad incentives.
It’s why I push public-benefit companies, non-profits, foundations, and coops here as the best structures to use for morally-focused businesses. There’s bad things that can still happen in these models. They just naturally push organizations’ actions in less-evil directions than publicly-traded, for-profit companies or VC companies trying to become them. I strongly advise against paying for or contributing to products of the latter unless protections are built-in for the users with regards to lock-in and their data. An example would be core product open-sourced with a patent grant.
Capitalism (or if you prefer, economics) isn’t a “conspiracy theory”. Neither is rudimentary business strategy. It’s amusing to me how many smart, competent, highly educated technical people fail so completely to understand these things, and come up with all kinds of fanciful stories to bridge the gap. Stories about the role and purpose of the W3C, for instance.
Having read all these hand-wringy threads about implementation diversity in the wake of this EdgeHTML move, I wonder how many would complain about, say, the lack of a competitor to the Linux kernel? There’s only one kernel, it’s financially supported by numerous mutually distrustful big businesses and used by nearly everybody, its arbitrary decisions about its API are de-facto hard standards… and yet I don’t hear much wailing and gnashing, even from the BSD folks. How is the linux kernel different than Chromium?
While I actually am concerned about a lack of diversity in server-side infrastructure, the Linux kernel benefits, as it were, from fragmentation.
There’s only one kernel
This simply isn’t true. There’s only one development effort to contribute to the kernel. There is, on the other hand, many branches of the kernel tuned to different needs. As somebody who spent his entire day at work today mixing and matching different kernel variants and kernel modules to finally get something to work, I’m painfully aware of the fragmentation.
There’s another big difference, though, and that’s in leadership. Chromium is run by Google. It’s open source, sure, but if you want your commits into Chromium, it’s gonna go through Google. The documentation for how to contribute is littered with Google-specific terminology, down to including the special internal “go” links that only Google employees can use.
Linux is run by a non-profit. Sure, they take money from big companies. And yes, money can certainly be a corrupting influence. But because Linux is developed in public, a great deal of that corruption can be called out before it escalates. There have been more than a few developer holy wars over perceived corruption in the Linux kernel, down to allowing it to be “tainted” with closed source drivers. The GPL and the underlying philosophy of free software helps prevent and manage those kinds of attacks against the organization. Also, Linux takes money from multiple companies, many of which are in competition with each other. It is in Linux’s best interest to not provide competitive leverage to any singular entity, and instead focus on being the best OS it can be.
If there is an internal memo at Google along the lines of “try to break the other web browsers’ perf as much as possible” that is not “rudimentary business strategy”, it’s “ground for anti-trust action”.
It’s as good of a strategy as helping the Malaysian PM launder money and getting a 10% cut (which… hey might still pay off)
Main difference is that there are many interoperable implementations of *nix/SUS/POSIX libc/syscall parts and glibc+Linux is only one. A very popular one, but certainly not the only. Software that runs on all (or most) *nix variants is incredibly common, and when something is gratuitously incompatible (by being glibc+Linux or MacOS only) you do hear the others complain.
Software that runs on all (or most) *nix variants is incredibly common
If by “runs on” you mean “can be ported to and recompiled without major effort”, then I agree, and you’re absolutely right to point out the other parts of the POSIX and libc ecosystem that makes this possible. But I can’t think of any software that’s binary compatible between different POSIX-ish OSs. I doubt that’s even possible.
On the other side of the analogy, in fairness, complex commercial web apps have long supported various incompatible quirks of multiple vendors’ browsers.
I am disgusted with the Linux monoculture (and the Linux kernel in general), even more so than with the Chrome monoculture. But that fight was fought a couple decades ago, it’s kinda late to be complaining about it. These complaints won’t be heard, and even if they are heard, nobody cares. The few who care are hardly enough to make a difference. Yes we have the BSDs (and I use one) and they’re in a minority position, kinda like Firefox…
How much of a monoculture is Linux, really? Every distro tweaks the kernel at least to some extent, there are a lot of patch sets for it in the open, and if you install a distro you get to choose your tools from the window manager onwards.
The corporatization of Linux is IMO problematic. Linus hasn’t sent that many angry emails, proportionally speaking, but they make the headlines every time, so my conspiracy theory is that the corporations that paid big bucks for board seats on the Foundation bullied him into taking his break.
We know that some kernel decisions have been made in the interest of corporations that employ maintainers, so this could be the tip of an iceberg.
Like the old Finnish saying “you sing his songs whose bread you eat”.
It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line.
I think this is true. If Google screws us over with Chrome, we can switch to Firefox, Vivaldi, Opera, Brave etc and still have an acceptable computing experience.
The real concerns for technological freedom today are Google’s web application dominance and hardware dominance from Intel. It would be very difficult to get a usable phone or personal server or navigation software etc without the blessing of Google and Intel. This is where we need more alternatives and more open systems.
Right now if Google or Intel wants to, they can make your life really hard.
I don’t know. MIPS is open sourcing their hardware and there’s also RISC-V. I think the issue is that as programmers and engineers we don’t collectively have the willpower to make these big organizations behave because defecting is advantageous. Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.
“Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.”
Boom. You nailed it! I’ve been calling it out in threads on politics and business practices. Most of the time, people that say they’re about specific things will ignore them for money or try to rationalize how supporting it is good due to other benefits they can achieve within the corruption. Human nature. You’re also bringing in organizations representing developers to get better pay, benefits, and so on. Developers are ignoring doing that more than creatives in some other fields.
Yup. I’m not saying becoming organized will solve all problems. At the end of the day all I want is ethics and professional codes of conduct that have some teeth. But I think the game is rigged against this happening.
I don’t think RISC-V is ready for general purpose use. Some CPUs have been manufactured, but it would be difficult to buy a laptop or phone that carries one. I also think that manufacturing options are too limited. Acceptable CPUs can come from maybe Intel and TSMC, and who knows what code/sub-systems they insert into those.
This area needs to be more like LibreOffice vs Microsoft Office vs Google Docs vs others on Linux vs Windows vs MacOS vs others
We don’t really know yet how well server-side Blazor (Razor Components) will scale with heavy use applications.
In principle, people could take this approach in other languages as well. But I think Elixir / Erlang are uniquely positioned to do it well, as LiveView is built on Phoenix Channels, which (because they use lightweight BEAM processes) can easily scale to keep server-side state for every visitor on your site: https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections
On the other hand, the Elixir community is very friendly. :)
Is that comment supposed to contrast the friendly Elixir community with the JS community? Is the JS community considered unfriendly? It’s way, way bigger than the Elixir community, so there are bound to be some/more unfriendly people. Maybe it’s so big that the concept of a “JS community” doesn’t even make sense. It’s probably more like “Typescript community”, “React community”, “Node community”, etc… But there are a lot of friendly people and helpful resources out there in JS-land, in my experience. I hope others have found the same thing.
The Elixir community is still in the “we’re small and must be as nice as possible to new people so they’ll drink the koolaid” phase. The “community”, such as it is, is also heavily drawn from job shops and the conference circuit, so that’s a big factor too.
Past the hype it’s a good and serviceable language, provided you don’t end up on a legacy codebase.
From bitter experience, I’d say it would be an Elixir codebase, written in the past 4 or 5 years, spanning multiple major releases of Ecto and Phoenix and the core language, having survived multiple attempts at CI and deployment, as well as hosting platforms. Oh, and database drivers of varying quality as Ecto got up to speed. Oh oh, and a data model that grew “organically” (read: wasn’t designed) from both an early attempt at Ecto and being made to work with non-Ecto-supported DB backends, resulting in truly delightful idioms and code smells.
Oh, and because it is turning a profit, features are important and spending time on things that might break the codebase is somewhat discouraged.
Elixir for green-field projects is absolutely a joy…brown-field Elixir lets devs just do really terrible heinous shit.
Elixir for green-field projects is absolutely a joy…brown-field Elixir lets devs just do really terrible heinous shit.
Totally agree, but I would say that significantly more heinous shit is available to devs in Ruby or another dynamic imperative language. The Elixir compiler is generally stricter and more helpful, and most code is just structured as a series of function calls rather than as an agglomeration of assorted stateful objects.
The refactoring fear is real though. IMO the only effective salve for that sickness is strong typing (and no, Dialyzer doesn’t count).
I mean, it’s really quite good in a number of ways, and the tooling is really good. That said, there’s nothing by construction that will keep people from doing really unfortunate things.
😊 I can see how it sounded that way, but I didn’t mean to imply anything about anyone else. The parent post said the Elixir community is small, so I was responding to that concern.
I’m not sure what you mean by “trying to polemic” – that doesn’t make sense to me as a phrase – but it was a genuine question about whether the JS community is considered to be unfriendly. I’d be happy to be told that such a question is off-topic for the thread, and I certainly don’t want to start a flame war, but I didn’t bring up the friendliness of the community. I’m sure the author didn’t mean harm, but I read (perhaps incorrectly) that part of their reply as part of an argument for using Elixir over JS to solve a problem.
What I meant to say was: “If this looks like it could be a good fit for thing you want to do, but you’re daunted by the idea of learning Elixir, don’t worry! We are friendly.”
I meant starting a controversy, sorry for my poor English!
I’m sorry if it felt harsh; that wasn’t what I meant to convey. I really did think your goal was to start a flame war.
Every community has good and bad actors. Some people praise a lot some communities, but I don’t think they mean the others aren’t nice either.
The only thing that I can think of is that smaller communities have to be very careful with newcomers, because it helps to grow the community. JS people don’t need to be nice to each other; the community and the project are way past that need. So I guess you would find a colder welcome there than with a tiny community.
I’m not ignorant (well I am, but not about this): polemic is indeed an English word, but it’s not a verb. The phrase “trying to polemic” doesn’t make sense in English, it requires interpretation, which makes the meaning unclear. I can think of two interpretations for “trying to polemic” (there may be others) in the context of the comment:
My comment was polemic
I was attempting to start a polemical comment thread, aka a flame war. With the later clarification that seems like what the author was thinking.
The thing is that not everyone is at your level of English proficiency. You’re having a discussion here with people from around the world, you’ll need to make a couple of adjustments for expected quality of English and try to get the rough meaning of what they’re saying, otherwise you’ll be stuck pointing out grammatical errors all day.
I wasn’t really trying to point out an English error, and perhaps I did a poor job of that. I stand by the claim that it is an English error though.
I work with non-native English speakers all day, so I’m aware of the need to try to understand other people and to make sure we’re on the same page. I’ll give a lot of slack to anyone, native or non-native, who’s trying to express themselves. The problem with the phrase “I feel you’re just trying to polemic on the subject” is that at least some of the interpretations change the meaning. On the one hand, it could be saying that my comment was polemic; on the other, it could be saying that my comment was trying to start a polemical thread. It’s not the same thing. And, for what it’s worth, if you’re going to throw out an uncommon (and quite strong) English word like “polemic”, it’s best to understand its usage correctly. If the author had accused me of trolling, which I think is what they meant, that would have been both clearer and more accurate (though my intent was not to troll).
Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.
To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.
Like many scholars at the intersection of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.
In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like a machine masterfully explaining history, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.
Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to the usefulness of those frameworks.
Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!
Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.
Thought-provoking, grand philosophy had long been a part of the discipline, but it became especially popular during the Industrial Era (some argue that it was Francis Bacon who really brought forth the idea of predicting progress), with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics: grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism had a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing its surroundings for the first time.
Modernism flourished in a lot of fields in the late 19th and early 20th centuries. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism and much of Hegel’s philosophy were steeped in this modernist tradition.
In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in horrors of staggering magnitude. The same pattern repeated with eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s incompleteness theorems showed us that there are statements whose validity we cannot evaluate. Moreover, many social sciences that had bought into modernist ideas – anthropology, history, and urban planning among them – were having trouble making progress that agreed with the grand modernist ideas guiding their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.
Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance with the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies can take hold again.
I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having any worth if they are predictive and testable, i.e. they allow us to predict how the whole works and what it does based on movements of the pieces.
You’re not thinking like a philosopher ;-)
Neither are you ;-)
https://plato.stanford.edu/entries/pseudo-science/ https://plato.stanford.edu/entries/popper/
Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.
Unfortunately, this has proven to be, at best, beyond the limits of human understanding and comprehension. This is especially difficult to do in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its text are barely of historical interest now.
That’s not to say that developing a theory of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think about things that they had never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production was involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.
This is true here as well (although I should add that, unlike most materialist historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models, if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something you’re tempted to think about when considering pre-industrial societies, for example.
These models don’t really have predictive value and they probably can’t ever gain one. But they do have an exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways than one, from more angles, and considering more factors, and possibly understand it better.
That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in the Economics field: there are strong calls to stop attributing predictive value to macroeconomic models because after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.
Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.
(OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.
Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could win a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably not be anywhere near as big, but it will require more work because you have to understand the existing system first.
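To make that concrete, here’s a rough sketch (the module names, the hashing of the argument, and the 60-second TTL are all made up for illustration): the pure version is a single function, while the cached version drags in a process, a table, expiry, and invalidation.

    # Pure version: trivially testable, nothing to invalidate.
    defmodule Pricing do
      def quote_for(items), do: Enum.sum(Enum.map(items, & &1.price))
    end

    # Cached version: faster on repeated calls, but with many more moving parts.
    defmodule CachedPricing do
      use Agent

      @ttl_ms 60_000

      def start_link(_), do: Agent.start_link(fn -> %{} end, name: __MODULE__)

      def quote_for(items) do
        key = :erlang.phash2(items)
        now = System.monotonic_time(:millisecond)

        Agent.get_and_update(__MODULE__, fn cache ->
          case Map.get(cache, key) do
            # Fresh enough: serve from the cache.
            {value, cached_at} when now - cached_at < @ttl_ms ->
              {value, cache}

            # Missing or stale: recompute and store. Note that stale entries are
            # never evicted otherwise, which is yet another thing to think about.
            _ ->
              value = Pricing.quote_for(items)
              {value, Map.put(cache, key, {value, now})}
          end
        end)
      end
    end

Nothing here is hard on its own, but each detail (key choice, TTL, eviction, the Agent itself) is a new way for the next change to go wrong, which is the diminishing-returns point above.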
I’m not sure it’s so different.
A time saving or critically important feature for me may be a “bloated” waste of bits for somebody else.
In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.
It might look like prolonged human misery throughout the world.
https://ourworldindata.org/uploads/2019/11/Extreme-Poverty-projection-by-the-World-Bank-to-2030-786x550.png
Bluntly: capitalism, growth, wealth, technology - these are the drivers of human progress and flourishing.
I agree except that the poverty reduction shown in East Asia is from socialist China.
China’s “socialism” revolves around state-owned enterprises in a market economy. They’re pretty capitalist; even Chinese schools teach the Chinese system of government as a Hegelian derivation of socialism and capitalism.
What distinguishes capitalism as a system is that profit is the decisive and ultimate factor around which economic activity is organized. China’s system makes use of markets and private enterprise, but it is ultimately planned and organized around social ends (see: the aforementioned poverty alleviation).
In China they describe their current system as the lower stage of socialism, but yes they’ve developed it in part based on insights into the contradictions of earlier socialist projects.
Another, less charitable, way of looking at it: the Chinese Government is unwilling to relinquish power, but discovered through the starvation and murder of 45 million of their own people that mixed economies are less bad than planned economies.
Yeah, I used to believe all that too. But eventually I got curious about what people on the other side of the argument could possibly have to say, and much to my surprise I found they had stronger arguments and a more serious commitment to truth. Then I realized that the people pushing those lines I believed were aligned with the people pushing all sorts of anti-human ideologies like degrowth.
“Government willing to relinquish power” is a sufficiently low-half-life, unstable-state-of-being that the average number in existence at any given time is zero. What information does referencing it add?
I disagree. Any government participating in open, free, elections is clearly willing to relinquish power.
In Australia, 78 senate seats and 151 house seats change occupants at an election; the remaining 140,000 government employees largely remain the same.
Is replacing 0.1% of the people really ‘replacing the government’?
Ah, fair - I was referring to the politicians (theoretically) in charge of the civil service. I’m intrigued by where you’re going with this, though … are you concerned about the efficacy of changing the 0.1% even in the case of democratically elected Governments?
To my mind, long-term stability is the key practical advantage of constitutional democracies as a form of government.
Dictatorships change less frequently, and churn far more of the government when they do. Single-party rule is subject to sudden, massive policy reversals.
Stability (knowing how the rules can change over time, and how they can’t) is what makes them desirable places for the wealthy to live and invest, which makes larger capital works possible.
Right so to paraphrase - you don’t see the replacement of politicians by democratic means as likely to effect significant change, but also, you see that as a feature not a bug?
Essentially, yes. Significant changes would imply that the voters have drastically changed their minds in a short time, which essentially never happens. The set of changes is also restricted (eg no retrospective crimes, restrictions on asset seizure).
Some starting criticism: https://issforum.org/essays/PDF/CR1.pdf
I’d encourage taking “our world in data” charts with a grain of salt when considering fossil fuel dependence (and our future), losses measured against the planetary boundaries framework (notably biodiversity), etc.
Hunter-gatherer societies also ran up against the limitations of their mode of relating to the environment. A paradigm shift in this relationship opened up new horizons for growth and development.
If we’ve reached similar environmental limits then the solution is a similar advancement to a higher mode, not “degrowth” (an ideology whose most severe ramifications will inevitably fall upon the people who are struggling the most already).
This is a book review. What does it do to suggest that the data showing a decline in the number of people in extreme poverty are false?
Okay, go back to the linked chart, which references Poverty and Shared Prosperity, (World Bank, 2018) - here’s some reflection around the metrics - 12 Things We Can Agree On about Global Poverty (Hickel & Kenny, 2018) - notably the words of caution about such data:
Because you find value in it. The same reason people pay subscriptions to Netflix or their favorite YouTuber, or subscribe to the Patreons of game modders or anyone else.
Where does this sentiment come from? I didn’t read anything about anyone owing anyone anything in the linked post.
Can you define “vanity project” here? It seems you are making a value judgment, the phrase implies that such projects have little value aside from stroking one’s ego. I wonder what has value, in your eyes.
Are you saying that because computer languages already exist, there is no value to having new languages?
Do humans already communicate perfectly with computers? Do computers perfectly meet humanity’s needs? Are computer programs free of bugs and vulnerabilities? Are all programs fast and efficient, user-friendly, and easy+quick to develop properly? Is there no room for improvements over existing languages that might help address these issues?
For elm specifically its designers seem to have very strong opinions on how to do things “right”, to the detriment of users (see e.g. https://dev.to/kspeakman/elm-019-broke-us--khn)
A major way to have a software project create a steady income flow is to get companies on board (they’re much less cost-sensitive than individual users), but pulling the rug out from under their feet is a sure way to make sure that this won’t happen.
So for elm specifically, I think “vanity project” is an apt description.
Agreed, and “getting companies on board” doesn’t necessarily mean compromising design decisions like he describes. If people are willing to invest in your alternative language that means that they largely agree with your design principles and values. But it does mean providing the kinds of affordances and guarantees that allow an organization to be in control of their own destiny and engineer a robust and maintainable system. Elm has had almost no energy invested into these concerns.
I see nothing wrong with a project whose purpose is enjoyment, that includes some amount of stroking of ego.
Finding out which language features have the greatest amount of some desirable characteristic requires running experiments. I’m all for running experiments to see what is best (however best might be defined).
Creating a new language and claiming it has this, that or the other desirable characteristics, when there is no evidence to back up the claims, is proof by ego and bluster (this is a reply to skyfaller’s question, not a statement about the linked to post; there may be other posts that make claims about Elm).
How would a person establish any evidence regarding a new language without first designing and creating that new language? I agree that evidence for claims is desirable, but your original comment seems to declare all new language design to be vanity (i.e. only good for ego-stroking), and that’s a position that requires evidence as well. Just because a language has not yet proven its value does not mean it has no value. Reserving judgment until you can see some results seems a more prudent tactic than, well, prejudice.
First work out what language features are best, then design the language. There are plenty of existing languages to experiment with.
Designing and implementing a language, getting people to learn it, writing code using it, and only then running experiments is completely the wrong way of doing things.
How do you work out which features are best if the ones you’re trying don’t exist yet? Wouldn’t that require designing and implementing them and then let people use them?
To be able to design/implement a language feature that does not yet exist, somebody needs to review all existing languages to build a catalogue of existing features, or consult such a catalogue if one already exists.
I don’t know of the existence of such a catalogue, pointers welcome.
Do you know of any language designer who did much more than using their existing knowledge of languages?
You wouldn’t have to know all existing language features to invent a new approach, and the only way to test a new approach would be to build it and let people use it.
I think I’m lost as to where your argument is headed.
Because they realise that there’s greater benefit in them having the other project with increased investment than in their own project. The invisible hand directs them to the most efficient use of resources.
Because they realise an absolute advantage the other project has in producing a useful outcome, and choose to benefit from that advantage.
Because they are altruists who see someone doing something interesting and decide to chip in.
Because they aren’t vain.
My solution:
Nice trick with that Enum.sort_by/2. I used a tuple in my solution to avoid needing to handle item, nil as a separate case. I thought about using a pattern match in the closure, but I thought a separate helper function would be clearer.
Cory Doctorow:
What’s the connection to Rust?
Jess and Bryan appear to be betting their startup on Rust.
*edit: they talk about rust throughout the episode.
I wonder if he has decided that writing a better low level programming language might be a more significant undertaking than he thought, especially if he hopes to primarily program video games…
I don’t think so. You can find videos of him demonstrating the language in great detail on his YouTube page: https://www.youtube.com/user/jblow888/playlists
Who knows. In the episode, Jonathan Blow doesn’t appear to indicate that he sees Rust solving his specific problems. I appreciated all the lamentations and the insights and the ranting. After 20 years I feel every pain point they ranted on.
First mention I noted was somewhere around 1:31:14 (h:m:s) into the podcast or so.
I agree with a lot of the points he makes, but testing is the fly in the ointment. It’s much harder to test a 200-line function than a couple of smaller functions.
I use this style all the time for batch processing glue code that’s not easy to unit test anyway. It makes sense to limit the scope of variables as much as possible. I regularly promote variables to higher levels of scope than what I initially predicted when they’re heavily used. It’s cleaner, and easier to refactor than threading unrelated state values in and out of multiple functions with awkwardly constructed structs or tuples.
He’s not talking about pure functions, where a granular separation of functionality improves testability, but rather cases where the program coordinates many stateful calls. Unit tests of functions split out from that kind of procedure don’t actually tell you much about the correctness of the program and generally become dead-weight change-detectors.
I agree that change-detector tests are worthless. I guess if there are no pieces that can be separated out as pure functions then yes, inlining makes a lot more sense.
This is conflating a few things.
First of all, I would argue that you can’t do engineering without understanding your company’s business. A software engineer has to balance lots of different factors when building a system, but the one factor that cannot be compromised is the amount of time and/or money that your organization can afford to spend on a given system in order to be sustainable. I agree this understanding is important, but it has very little to do with marketing.
Secondly, there is a kind of marketing which is just finding a way to inform potential customers about your product and explaining how it could help them. I think you’d be hard-pressed to find anyone who thinks this is evil. Then there is a whole other class of activity also called marketing which is varying degrees of manipulative, dishonest, and ineffectual make-work (see: most of the ad-tech industry). I think you’d be hard-pressed to argue that these activities aren’t evil without resorting to nihilism.
It has a grandiose claim and tries to attach itself to a well-respected coding standard, but it smells like a post-hoc justification for the unpalatable state of the code.
The code looks like a state machine. And a state machine can be written either as spaghetti code of ifs, with omg-space-shuttle-will-crash-if-you-forget-an-else fear, or as a table of state transitions, which by construction ensures everything is accounted for.
I have the feeling that most developers (with a CS degree) have forgotten about this. And on the other side, there’s also a big component of lack of education on the topic. I’m not sure how many working programmers have been taught about state machines or have invested time learning about them.
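In case it helps as a mentoring example, here is a minimal, made-up sketch of the table-driven shape (the states and events are hypothetical, not from the article): every legal transition is listed once, and anything not in the table is rejected in one place instead of falling through a forest of ifs.

    defmodule Door do
      # The whole machine is data; adding a state or event means adding a row.
      @transitions %{
        {:closed, :open}   => :opened,
        {:opened, :close}  => :closed,
        {:closed, :lock}   => :locked,
        {:locked, :unlock} => :closed
      }

      def step(state, event) do
        case Map.fetch(@transitions, {state, event}) do
          {:ok, next} -> {:ok, next}
          :error      -> {:error, {:invalid_transition, state, event}}
        end
      end
    end

    # Door.step(:closed, :open)  #=> {:ok, :opened}
    # Door.step(:locked, :open)  #=> {:error, {:invalid_transition, :locked, :open}}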
That said, it is definitely a good mentoring/training topic. I think it will be well received by my team, and in any case should start circulating the knowledge more. Does anyone have good resources on this?
I read this and thought, well, could you unit test this code to ensure correctness? I know unit testing threading behavior is tough, but if this is space shuttle levels of risk, might that effort be worth it?
Perhaps. Perhaps you just crashed the ship.
Nope. Not buying it. This is cheesy schtick covering up some very questionable coding practices.
I am a late stage beginning programmer struggling towards journeyman, and even I must ask “Why not AT LEAST use methods to collapse some of these 10 level deep conditional nests?”.
Good software engineering practice strives to keep code easy to reason about and thus more readable and maintainable. As much as we all love to be entertained by seeing HERE BE DRAGONS in source code, nobody actually thinks this is a GOOD idea.
This is an invitation to deviate from the norm, and I can’t see any good at all coming out of it.
I think the received wisdom about small functions and methods has gotten somewhat muddled. The small functions style has become an aesthetic preference (which I adopted and still observe in myself) that is applied arbitrarily without any objective understanding of its effects.
For things that are actually functions in the mathematical sense (i.e., pure functions) a granular separation of functionality simplifies testing and composition. But procedures that mutate state or coordinate multiple stateful systems are not testable and composable in the same way. In this context, the small functions/methods style is actually an obstacle to understanding the system and ensuring correctness.
See: http://number-none.com/blow/john_carmack_on_inlined_code.html
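To make the distinction concrete, here is a small made-up contrast (neither example is from the article): the pure function can be checked with a one-line assertion, while splitting the stateful procedure into tiny methods wouldn’t make the ordering of its side effects any easier to verify.

    # Pure: the whole behaviour is visible in the return value.
    defmodule Discount do
      def percent_off(total, percent), do: total - total * percent / 100
    end

    # e.g. assert Discount.percent_off(200, 10) == 180.0

    # Stateful coordination: a unit test here would mostly restate the sequence
    # of calls, i.e. a change-detector rather than a check of correctness.
    defmodule Checkout do
      def run(cart_pid, payment_pid) do
        total = GenServer.call(cart_pid, :total)
        :ok = GenServer.call(payment_pid, {:charge, total})
        GenServer.cast(cart_pid, :clear)
      end
    end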
Again, I am but an acolyte, but from my super simplistic perspective, having 8 levels of conditional nesting makes code SUPER hard to reason about, and when you break that into methods that signal the intent of the code contained within you increase readability.
I guess I’d thought of that as beyond argument. I’ll read the Carmack article, thanks for the link.
Yeah that’s definitely the accepted dogma but I’ve observed the opposite in large systems I’ve worked on (although it took a while for me to see it). If you look at game engines, which do some of the most complex coordination of stateful systems anywhere, you will see the large procedure/nested conditional style. This doesn’t come from an ignorance of the ability to make small methods.
The intent communicated by factoring code into small methods is that these calls can be rearranged and reused at will, but for stateful calls this most often isn’t true.
I can also imagine that in game engines simply eating the overhead induced by a method call (stack, heap, etc.) could be problematic.
Lesson for me here is that there are almost no hard and fast rules where code is concerned, but I still think that for the class of very non computationally intensive process control (Devops) work I do, having small, readable, clearly named methods rather than giant nesting structures is a best practice I’ll stand by.
I think it’s a mistake to think of it in terms of performance optimization. From the above article:
Also super interesting how people throw around down votes like candy.
How can my failure to buy into the argument being purveyed by the author possibly be incorrect?
haha hahahaha oh oh oh yeah definitely this is Tesla Motors we’re talking yeah
I don’t think that thread says anything about the expertise of the team that would have to implement multithreaded code, or anything about the overall level of development expertise at Tesla, really. If you’ve worked in software for a while, you should have plenty of stories like that yourself. (If you don’t, I contend you’ve been unusually lucky with your choice of employers.)
I really don’t like this idea. Think of all the edge cases. shudder
I get the concern, but there are telephone systems which have over 20 years continuous uptime using Erlang hot code updates without a visible outage.
There is a somewhat qualitative difference between a phone switch crashing and a car that suddenly can’t steer or brake going through a school zone and crashing.
I don’t mind reaping child processes in my programs, but I’d prefer my sedan not duplicate my behavior.
Sure - avoiding crashes in the phone switch isn’t even that important, and they still did it.
If anything, that’s a stronger argument that it’s hard to get wrong.
What exactly are you planning to write in Erlang?
If you mean the part where the computer runs a bunch of nasty heuristics to convert camera pictures and radar scans into second-by-second actions, don’t systems like TensorFlow normally use SIMD or the GPU for parallelism rather than threads, to avoid the overhead of cache coherency and context switching? When your tolerance for latency is that low, you do not use Erlang.
If you mean the part where you use map and traffic data to do your overall route, I don’t think you need to be that fast. You’re spending most of your time waiting on the database and network, and could probably use Erlang just fine. The important part is the fast self-driving heuristics system cannot block on the slower mapping system. The driving system needs to send out a request for data, and keep driving even if the mapping system doesn’t respond right away.
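As a sketch of that “ask, but don’t wait” shape (all of the module names and messages here are hypothetical), a BEAM process can fire off the request and keep handling its own work, treating a late or missing reply as just another message:

    defmodule DrivingLoop do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      def init(opts) do
        :timer.send_interval(20, :tick)   # the control loop keeps its own cadence
        {:ok, %{position: opts[:position], route: nil}}
      end

      # Ask the (possibly slow) mapping service without blocking this process.
      def handle_info(:tick, state) do
        GenServer.cast(MappingService, {:route_request, self(), state.position})
        {:noreply, drive(state)}          # keep driving with whatever route we already have
      end

      # A fresh route may arrive later, or never; either way we never waited on it.
      def handle_info({:route_update, route}, state) do
        {:noreply, %{state | route: route}}
      end

      defp drive(state), do: state        # placeholder for the actual control step
    end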
I was being facetious, really. You wouldn’t run BEAM on the AP computer; it’s not meant for that kind of data crunching.
It is my understanding that the MCU is the middleman between the AP computer and the car’s hardware – this is how it also applies firmware updates to various parts of the car.
So I would write the AP control plane in Erlang/Elixir for extremely reliable low latency message passing. I expect the MCU is receiving values from the AP hardware which it acts upon by sending the signals to the car’s hardware. This also means it’s extremely unlikely to crash the process from bad data coming from the AP computer.
This is a guess based on what I’ve seen inside the MCU, but haven’t bothered digging too deep.
I’m also confused about why you think Erlang is not low latency?
The language that’s designed for safe multithreading and high performance is Rust. BEAM languages wouldn’t provide acceptable performance for this use-case.
Rust has no formal spec and should stay far away from systems that control the life or death of humans for now.
The languages used in successful projects in the safety-critical field had no formal spec. That’s mostly C and assembly, with some Ada, C++, and Java. So Rust would probably be an improvement, unless it was a group throwing every verification tool they can at their C code. It has the most such tools.
To be fair Ada has a pretty decent specification and SPARK/Ada probably has the most usable verification tools for industrial usage today, as long as you want specifications that are more expressive than what your type-system can capture. The Rust system may be very good at catching ownership-related mistakes, but there still currently exists no automated tools to verify that, say, a function that claims to be sorting data actually returns a sorted result.
You’re right in that Ada/SPARK can get further in correctness. Most in safety-critical systems use C subsets with no formal methods, though. There’s lots of review, lots of testing, and recently more use of automated analyzers.
Even so, Ada still has Rust beat on that given there’s more tooling for analyzing and testing it. C has even more than Ada.
We should strive to make better architectural decisions though.
Does Elixir have a formal spec? :)
No, but I wonder how this works in relation to JVM / BEAM. Is the formal spec really about the specific language or is the behavior of the VM sufficient? I’m not aware of different JVM or BEAM languages being able to do things that are impossible in Java/Erlang.
Need more info, but it’s interesting to think about.
Countless other languages are designed for high performance and sufficient safety.
Summary: author’s expectations of a young language exceed the actual implementation, so they write a Medium article.
If you can’t tell: slightly triggering article for me, and I don’t use/advocate for Elm. I’d much prefer if the author either pitched in and helped, or shrugged and moved on to something else. Somehow, yelling into the void about it is worse to me, I think because there are one or two good points in there sandwiched between non-constructive criticisms.
The article provides valuable information for people considering using Elm in production. The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.
I didn’t like that he makes unrelated and unsupported claims in the conclusion (“Elm is not the fastest or safest option”). That’s not helpful.
I read “fastest” and “safest” as referring to “how fast can I can get work done” and “is this language a safe bet”, not fast and safe in the sense of performance. If that’s the right interpretation, then those conclusions flow naturally from the observations he makes in the article.
Right, the author made the same clarification to me on Twitter, so that’s definitely what he meant. In that sense, the conclusion is fine. Those are very ambiguous words though (I took them to mean “fastest runtime performance” and “least amount of runtime errors”).
Yeah definitely. I also was confused initially.
TBF, I was a little too snarky in my take. I don’t want to shut down legitimate criticism.
That ambiguity is a problem. There’s also a chicken/egg problem with regard to marketing when discussing whether something is production ready. I’m not sure what the answer is.
It’s even more ambiguous for Elm. There are dozens of 100K+ line commercial code bases out there. How many should there be before the language is “production ready”? Clearly, for all those companies, it already is.
Perhaps the question is misguided and has reached “no true Scotsman” territory.
That’s one reason why this topic is touchy to me: things are never ready until the Medium-esque blogosphere spontaneously decides it is ready, and then, without a single ounce of discontinuity, everyone pretends like they’ve always loved Elm, and they’re excited to pitch in and put forth the blood, sweat, and tears necessary to make a healthy, growing ecosystem. Social coding, indeed.
In a sense, everyone wants to bet on a winner, be early, and still bet with the crowd. You can’t have all those things.
I like your last paragraph. When I think about it, I try to reach the same impossible balance when choosing technologies.
I even wrote a similar post about Cordova once (“is it good? is it bad?”). Hopefully it was a bit more considered as I’d used it for 4 years before posting.
The thing that bothers me with the developer crowd is somewhat different, I think. It’s the attempt to mix the other two unmixable things. On one hand, there’s the consumerist attitude to choosing technologies (“Does it work for me right now? Is it better, faster, cheaper than the other options?”). On the other hand, there are demands for all the benefits of open source like total transparency, merging your PR, and getting your favourite features implemented. Would anyone demand this of proprietary software vendors?
I’m not even on the core Elm team, I’m only involved in popularising Elm and expanding the ecosystem a bit, but even for me this attitude is starting to get a bit annoying. I imagine it’s worse for the core team.
Hey, thanks for your work on Elm. I’m much less involved than you, but even I find the “walled garden” complaints a little irritating. I mean, if you don’t like this walled garden, there are plenty of haphazard dumping grounds out there to play in, and even more barren desert. Nobody’s forcing anybody to use Elm! For what it’s worth, I think Evan and the Elm core team are doing great work. I’m looking forward to Elm 1.0, and I hope they take their time and really nail it.
The author of this article isn’t pretending to be an authority on readiness, and claiming that they’ll bandwagon is unwarranted. This article is from someone who was burned by Elm and is sharing their pain in the hopes that other people don’t get in over their heads.
Being tribal, vilifying the “Medium-esque blogosphere” for acts that the author didn’t even commit, and undermining their legitimate criticisms with “well, some people sure do love to complain!” is harmful.
I’d like to push back on this. What is “production ready”, exactly? Like I said in another comment, there are dozens of 100K+ line commercial Elm code bases out there. Clearly, for all those companies, it already is.
I’ve used a lot of other technologies in production which could easily be considered “not production ready”: CoffeeScript, Cordova, jQuery Mobile, Mapbox. The list goes on. They all had shortcomings, and sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.
The point is, it either works in your particular situation, or it doesn’t. The question is meaningless.
Here are my somewhat disjoint thoughts on the topic before the coffee has had a chance to kick in.
At a minimum, the language shouldn’t make major changes between releases that require libraries and codebases to be reworked. If it’s not at a point where it can guarantee such a thing, then it should state that fact up front. Instead, its creator and its community heavily promote it as being the best thing since sliced bread (“a delightful language for reliable webapps”) without any mention of the problems described in this post. New folks take this to be true and start investing time into the language, often quite a lot of time, since the time span between releases is so large. By the time a new release comes out and changes major parts of the language, some of those people will have invested so much time and effort into the language that the notion of upgrading (100K+ line codebases, as you put it) becomes downright depressing. Not to mention that most of those large codebases will have dependencies that themselves will need upgrading or, in some cases, will have to be deprecated (as elm-community has done for most of my libraries with the release of 0.19, for example).
By promoting the language without mentioning how unstable it really is, I think you are all doing it a disservice. Something that should be perceived as good, like a new release that improves the language, ends up being perceived as a bad thing by a large part of the community, and so they leave with a bad taste in their mouth – OP made a blog post about it, but I would bet the vast majority of people just leave silently. You rarely see this effect in communities surrounding other young programming languages, and I would posit that it’s exactly because of how they market themselves compared to Elm.
Of course, in some cases it can’t be helped. Some folks are incentivized to keep promoting the language. For instance, you have written a book titled “Practical Elm”, so you are incentivized to promote the language as such. The more new people who are interested in the language, the more potential buyers you have, or the more famous you become. I believe your motivation for writing that book was pure, and no one’s going to get rich off a book on Elm. But my point is that you are more bought into the language than others normally are.
That is the very definition of not-production-ready, isn’t it?
Disclaimer: I quit Elm around the release of 0.18 (or was it 0.17??) due to a distaste for Evan’s leadership style. I wrote a lot of Elm code (1 2 3 4 and others) and put some of it in production. The latter was a mistake and I regret having put that burden on my team at the time.
From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.
FWIW, I would also prefer more transparency about Elm development. I had to scramble to update my book when Elm 0.19 came out. However, not for a second am I going to believe that I’m entitled to transparency, or that it was somehow promised to me.
To your other point about marketing, if people are making decisions about putting Elm into production based on its tagline, well… that’s just bizarre. For example, I remember looking at React Native in its early stages, and I don’t recall any extensive disclaimers about its capabilities or lack thereof. It was my responsibility to do that research - again, because limitations for one project are a complete non-issue for another project. There’s just no one-size-fits-all.
Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?
Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.
And that’s great! The issue is the things that cannot be upgraded. Let’s take elm-combine (or parser-combinators, as it was renamed), for example. If you depended on the library in 0.18 then, barring the invention of AGI, there’s no automated tool that can help you upgrade: your code will have to be rewritten to use a different library, because elm-combine cannot be ported to 0.19 (not strictly true, because it can be ported, but only by the core team, and my point still stands because it won’t be). Language churn causes ecosystem churn which, in turn, causes pain for application developers, so I don’t think it’s a surprise that folks get angry and leave the community when this happens, given that they may not have had any prior warning before they invested their time and effort.
I don’t think it’s an exaggeration to call a language with breaking changes between releases unstable. To be completely honest, I can’t think of a better word to use in this case. Fluctuating? In flux? Under development? Subject to change? All of those fit and are basically synonymous to “unstable”. None of them are highlighted anywhere the language markets itself, nor by its proponents. I’m not making a judgement on the quality of the language when I say this. I’m making a judgement on how likely it is to be a good choice in a production environment, which brings me to…
They were not good choices, because, by your own admission, you were unable to meet your requirements by using them. Hence, they were not production-ready. Had you been able to meet your requirements and then been forced to make changes to keep up with them, then that would also mean they were not production-ready. From this we have a pretty good definition: production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is. Being forced to spend time to keep up with changes to the language and its ecosystem is “having a bad time” in my book.
I understand that our line of work essentially entails us constantly fighting entropy and that, as things progress, it becomes harder and harder for them to maintain backwards compatibility, but that doesn’t mean that nothing means anything anymore, or that we can’t reason about the likelihood that something is going to bite us in the butt later on. From a business perspective, the more likely something is to change after you use it, the larger the risk it poses. The more risks you take on, the more likely you are to fail.
I think your definition is totally unworkable. You’re claiming that technologies used in thousands upon thousands of projects were not production ready. Good luck with finding anything production ready then!
I’ve been working with Clojure for almost a decade now, and I’ve never had to rewrite a line of my code in production when upgrading to newer versions because Cognitect takes backwards compatibility seriously. I worked with Java for about a decade before that, and it’s exact same story. There are plenty of languages that provide a stable foundation that’s not going to keep changing from under you.
I am stating that being able to put something in production is different from said thing being production ready. You claim that there is no such thing as “production ready” because you can deploy anything which is a reduction to absurdity of the situation. Putting something into production and being successful with it does not necessarily make it production ready. It’s how repeatable that success is that does.
It doesn’t look like we’re going to get anywhere past this point so I’m going to leave it at that. Thank you for engaging and discussing this with me!
Thank you as well. As I said in another comment, this is the first time I tried having an extended discussion in the comments in here, and it hasn’t been very useful. Somehow we all end up talking past each other. It’s unfortunate. In a weird way, maybe it’s because we can’t interrupt each other mid-sentence and go “Hang on, but what about?…”. I don’t know.
This doesn’t respond to bogdan’s definition in good faith.
In response to your criticisms, bogdan proposed a scale of production-readiness. This means there is no hard distinction between “production-ready” and “not production-ready”. Elm is lower on this scale than most advocates imply, and the article in question provides supporting evidence for Elm being fairly low on this scale.
What kind of discussion do you expect to have when the first thing you say to me is that I’m responding in bad faith? Way to go, my friend.
Frankly, I don’t really want to have a discussion with you. I’m calling you out because you were responding in bad faith. You didn’t address any of his actual points, and you dismissed his argument condescendingly. The one point you did address is one that wasn’t made, and wasn’t even consistent with bogdan’s stance.
In my experience, the crusader for truth and justice is one of the worst types of participants in a forum.
We may not have agreed, but bogdan departed from the discussion without histrionics, and we thanked each other.
But you still feel you have to defend his honour? Or are you trying to prove that I defiled the Truth? A little disproportionate, don’t you think?
(Also: don’t assign tone to three-sentence comments.)
I disagree that the question is meaningless just because it has a subjective aspect to it. A technology stack is a long-term investment, and it’s important to have an idea of how volatile it’s going to be. For example, changes like the removal of the ability to do interop with JS even in your own projects clearly came as a surprise to a lot of users. To me, a language being production ready means that it’s at the point where things have mostly settled down, and there won’t be frequent breaking changes going forward.
By this definition, Python wasn’t production ready long after the release of Python 3. What is “frequent” for breaking changes? For some people it’s 3 months, for others it’s 10 years. It’s not a practical criterion.
Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!
(Also, many people have a different perspective about the interop issue; it wasn’t a surprise. I don’t want to rehash all that though.)
Python 3 was indeed not production-ready by many people’s standards (including mine and the core team’s based on the changes made around 3.2 and 3.3) after its release up until about version 3.4.
“it’s improving too slowly” is not the same as “it’s changing too slowly”.
Sorry, this doesn’t make any sense.
By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”. But if we’re going to write off a hugely popular language like that, we might as well write off the whole tech industry (granted, on many days that’s exactly how I feel).
Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.
Python 2 and 3 became different languages at the split as evidenced by the fact that they were developed in parallel. Python 2 was production ready. Python 3 was not. The fact that we’re using numbers to qualify which language we’re talking about proves my point.
It took five years for Django to get ported to Python 3. (1 2)
You’re getting hung up on the wording here; “frequent” is not as important to Yogthos’s argument as “breaking changes” is.
I don’t think we’re going to get anywhere with this discussion by shifting goalposts.
I think most people agree that Python 3 was quite problematic. Your whole argument seems to be that just because other languages have problems, you should just accept random breaking changes as a fact of life. I strongly disagree with that.
The changes around ecosystem access are a HUGE breaking change. Basically any company that invested in Elm and was doing JS interop is now in a really bad position. They either have to stay on 0.18, re-implement everything they’re using in Elm, or move to a different stack.
Again, as I noted there is subjectivity involved here. My standards for what constitutes something being production ready are different than yours apparently. That’s fine, but the information the article provides is precisely what I’d want to know about when making a decision of whether I’d want to invest into a particular piece of technology or not.
I don’t think you are really aware of the changes to Elm, because you’re seriously overstating how bad they were (“re-implement everything” was never the case).
I agree that there is useful information in the article – in fact, I try to read critical articles first and foremost when choosing technologies so it’s useful to have them. I never said that we should accept “random breaking changes” either (and it isn’t fair to apply that to Elm).
I still don’t see that you have a working definition of “production ready” – your definition seems to consist of a set with a single occupant (Clojure).
As an aside, this is the first time I’ve had an extended discussion in the comments here on Lobsters, and it hasn’t been very useful. These things somehow always end up looking like everyone’s defending their entrenched position. I don’t even have an entrenched position – and I suspect you may not either. Yet here we are.
Perhaps I misunderstand the situation here. If a company has an Elm project in production that uses JS interop, what is the upgrade path to 0.19? Would you not have to rewrite any libraries from the NPM ecosystem in Elm?
I worked with Java for around a decade before Clojure, and it’s always been rock solid. The biggest change that’s happened was the introduction of modules in Java 9. I think that’s a pretty good track record. Erlang is another great example of a stack that’s rock solid, and I can name plenty of others. Frankly, it really surprises me how cavalier some developer communities are regarding breaking changes and regressions.
Forum discussions are always tricky because we tend to use the same words, but we assign different meanings to them in our heads. A lot of the discussion tends to be around figuring out what each person understands when they say something.
In this case it sounds like we have different expectations for production ready technology. I’m used to working with technologies where regressions are rare, and this necessarily colors my expectations. My views on technology adoption are likely more conservative than those of the majority of developers.
Prior to the 0.19 release, there was a way to directly call JS functions from Elm by relying on a purely internal mechanism. Naturally, some people started doing this, despite repeated warnings that they really shouldn’t. It wasn’t widespread, to my knowledge.
All the way back in 2017, a full 17 months before the 0.19 release, it was announced that this mechanism would be removed. It was announced again 5 months before the release.
Of course, a few people got upset and, instead of finding a migration path, complained everywhere they could. I think one guy wrote a whole UI framework based on the hack, so predictably he stomped out of the community.
There is an actual JS interop mechanism in Elm called ports. Anybody who used this in 0.18 (as they should have) could continue using it unchanged in 0.19. You can use ports to integrate the vast majority of JS libraries with Elm. There is no need to rewrite all JavaScript in Elm. However, ports are asynchronous and require marshalling data, which is why some people chose to use the internal shortcut (aka hack) instead.
So, if a company was using ports to interop with JS, there would be no change with 0.19. If it was using the hack, it would have to rewrite that portion of the code to use ports, or custom elements or whatever – but the rework would be limited to bindings, not whole JS libraries.
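To make that concrete, here’s a rough sketch of what the JS side of ports-based interop can look like, written in TypeScript. The Elm module name, the port names and the type declarations are made up for illustration; only the init/subscribe/send shape reflects how Elm wires up declared ports.

```typescript
// Assumes an Elm module (Main.elm) that declares two ports, e.g.:
//   port toJs : String -> Cmd msg              -- Elm -> JS
//   port fromJs : (String -> msg) -> Sub msg   -- JS -> Elm

// Elm ships no official TypeScript typings, so this declaration of the
// compiled Elm global is an illustrative assumption.
declare const Elm: {
  Main: {
    init(options: { node: HTMLElement | null }): {
      ports: {
        toJs: { subscribe(callback: (data: string) => void): void };
        fromJs: { send(data: string): void };
      };
    };
  };
};

const app = Elm.Main.init({ node: document.getElementById("app") });

// Elm pushes a value out through `toJs`; we call whatever JS code we need
// and push the result back in through `fromJs`. The round trip is
// asynchronous, which is exactly the limitation discussed elsewhere in
// this thread.
app.ports.toJs.subscribe((request: string) => {
  const result = request.toUpperCase(); // stand-in for a real JS library call
  app.ports.fromJs.send(result);
});
```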
There were a few other breaking changes, like removing custom operators. However, Elm has a tool called elm-upgrade which helps to identify these and automatically update code where possible.
There were also fairly significant changes to the standard library, but I don’t think they were any more onerous than some of the Rails releases, for example.
Here are the full details, including links to previous warnings not to use this mechanism, if you’re interested: https://discourse.elm-lang.org/t/native-code-in-0-19/826
I hope this clarifies things for you.
Now, regarding your “rock solid” examples, by which I think you mean no breaking changes: if that’s achievable, that’s good – I’m all for it. However, as a counterexample, I’ll bring up C++, which tied itself into knots by never breaking backward compatibility. It’s a mess.
I place less value on backward compatibility than you do. I generally think that backward compatibility ultimately brings software projects down. Therefore, de-prioritising it is a safer bet for ensuring the longevity of the technology.
Is it possible that there are technologies which start out on such a solid foundation that they don’t get bogged down? Perhaps – you bring up Clojure and Erlang. I think Elm’s core team is also trying to find that kind of foundation.
But whether Elm is still building up towards maturity or its core team simply has a different philosophy regarding backward compatibility, I think its stance is at least very clear to anyone who spends time researching it. So my view is that anybody who complains about it now failed to do their research before putting it into production.
I feel like you’re glossing over the changes from native modules to using ports. For example, native modules allowed exposing external functions as Tasks, which could then be composed. Creating Tasks also allows for making synchronous calls that return a Task Never a, which is obviously useful.
On the other hand, ports can’t be composed like Tasks, and as you note can’t be used to call synchronous code which is quite the limitation in my opinion. If you’re working with a math library then having to convert the API to async pub/sub calls is just a mess even if it is technically possible to do.
To sum up, people weren’t using native modules because they were completely irresponsible and looking to shoot themselves in the foot, as you seem to be implying. Being able to easily leverage the existing ecosystem obviously saves development time, so it’s not exactly surprising that people started using native modules. Once you have a big project in production it’s not trivial to go and rewrite all your interop in 5 months, because you have actual business requirements to work on. I’ve certainly never been in a situation where I could just stop all development and go refactor my code for as long as I wanted.
This is precisely the kind of thing I mean when I talk about languages being production ready. How much time can I expect to spend chasing changes in the language as opposed to solving business problems? The more breaking changes there are, the bigger the cost to the business.
I’m also really struggling to follow your argument regarding things like Rails or C++ to be honest. I don’t see these as justifying unreliable tools, but rather as examples of languages with high maintenance overhead. These are technologies that I would not personally work with.
I strongly disagree with the notion that backwards compatibility is something that is not desirable in tooling that’s meant to be used in production, and I’ve certainly never seen it bring any software projects down. I have however seen plenty of projects being brought down by brittle tooling and regressions.
I view such tools as being high risk because you end up spending time chasing changes in the tooling as opposed to solving business problems. I think that there needs to be a very strong justification for using these kinds of tools over ones that are stable.
I think we’re talking past each other again, so I’m going to wrap this up. Thank you for the discussion.
The question isn’t even close to meaningless… Classifying something as “production ready” means that it is either stable enough to rely on, or is easily swapped out in the event of breakage or deprecation. The article does a good enough job of covering aspects of elm that preclude it from satisfying those conditions, and it rightly warns people who may have been swept up by the hype around elm.
Elm has poor interop, and is (intentionally) a distinct ecosystem from JS. This means that if Elm removes features you use, you’re screwed. So, for a technology like Elm (which is a replacement for JS rather than an enhancement) to be “production ready”, it has to have a very high degree of stability, or at least long-term support for deprecated features. Elm clearly doesn’t have this, which is fine, but early adopters should be warned of the risks and drawbacks in great detail.
Let’s keep it really simple: to me, ‘production-ready’ is when the project version gets bumped to 1.0+. This is a pretty established norm in the software industry and usually a pretty good rule of thumb to judge by. In fact, Elm packages enforce semantic versioning, so if you extrapolate that to Elm itself you inevitably come to the conclusion that it hasn’t reached production readiness yet.
The term “production ready” is itself not at all clear. Some Elm projects are doing just fine in production and have been for years now. Some others flounder or fail. Like many things, it’s a good fit for some devs and some projects, and not for some others – sometimes for reasons that have little to do with the language or its ecosystem per se. In my (quite enjoyable!) experience with Elm, both official and unofficial marketing/docs/advocates have been pretty clear on that; but developers who can’t or won’t perceive nuance and make their own assessments for their own needs are likely to be frustrated, and not just with Elm.
I agree that there’s valuable information in this article. I just wish it was a bit less FUDdy and had more technical detail.
I think there’s an angle to Elm’s marketing that justifies these kinds of responses: Those “author’s expectations” are very much encouraged by the way the Elm team presents their language.
Which criticisms do you find unfair, which are the good points?
I’m sympathetic to both Elm and the author here. I understand Elm’s marketing stance because they ask devs to give up freely mixing pure/impure code everywhere in their codebase on top of a new language and ecosystem. (In general, OSS’s perceived need for marketing is pretty out of hand at this point and a bit antithetical to what attracts me to it in the first place). OTOH it shouldn’t be possible to cause a runtime error in the way the author described, so that’s a problem. I’d have wanted to see more technical details on how that occurred, because it sounded like something that type safety should have protected him from.
Fair criticisms:
Unfair criticisms:
The conclusion gets a little too emotional for my taste.
Thanks for the detailed reply; the criticism of the article seems valid.
(As a minor point, the “PRs being open” criticism didn’t strike me as unsubstantiated because I’ve had enough similar experiences myself, but I can see how the article doesn’t argue that well. Certainly I’ve felt that it would be more honest/helpful for elm to not accept github issues/prs, or put a heavy disclaimer there that they’re unlikely to react promptly, and usually prefer to fix things their own way eventually.)
A lot of the things listed in the article are choices that were explicitly made to make contributions harder. Elm’s development has deliberately made decisions that discourage outside contributions, and not in a merely incidental way.
This isn’t “the language is young” (well, except for the debug point); a lot of this is “the language’s values go against things useful for people deploying to production”.
I don’t know; other than the point about the inability to write native modules and the longstanding open PRs, all of the rest of the issues very much seem symptomatic of a young language.
The native module point sounds very concerning, but I don’t think I understand enough about elm or the ecosystem to know how concerning it is.
I’ve been vaguely following along with Elm, and the thing that makes me err on the side of agreeing with this article is that the native module thing used to not be the case! It was removed! There was a semi-elegant way to handle interactions with existing code and it was removed.
There are “reasons”, but as someone who has a couple of ugly hacks to keep a hybrid frontend + backend stack running nicely, I believe having those kinds of tricks is essential for bringing it into existing code bases. So seeing it get removed is a bit of a red flag for me.
Elm still has a lot of cool stuff, of course
I never relied on native modules, so I didn’t really miss them. But we now have ports, which I think is a much more principled (and interesting) solution. I felt that they worked pretty well for my own JS interop needs.
Stepping back a bit, if you require the ability to do ugly hacks, Elm is probably not the right tool for the job. There are plenty of other options out there! I don’t expect Elm to be the best choice for every web front-end, but I do appreciate its thoughtful and coherent design. I’m happy to trade backward compatibility for that.
If you spend any amount of time in the Elm community you will find that contributions to the core projects are implicitly and explicitly discouraged in lots of different ways. Even criticisms of the core language and paradigms or core team decisions are heavily moderated on the official forums and subreddit.
Also how are we using the term “young”? In terms of calendar years and attention Elm is roughly on par with a language like Elixir. It’s probably younger in terms of developer time invested, but again this is a direct result of turning away eager contributors.
I think it’s fine for Elm to be a small project not intended for general production usage, but Evan and the core team have continually failed to communicate that intent.
I guess by now it’s useless to complain about how confusing it is that OCaml has two (three?) “standard” package managers; the ecosystem around the language is kind of infamous for having at least two of everything. I trust the community will eventually settle on the one that works the best. At least it looks like esy is compatible with opam libraries (though the reverse is not true), so it might have a good chance against opam.
Also this is kind of unrelated, but I’m really salty about ReasonML recommending JS’s camelCase over OCaml’s snake_case. This is one of the few rifts in the ecosystem that can’t really be fixed with time, and now every library that wants to play well with both OCaml and Reason/BS ecosystems will have to export an interface in snake_case and one in camelCase.
I second the choice to use JS’s camelCase for ReasonML as a salty/trigger point. It seems like a minor syntactic thing to make it more familiar for JS developers making the switch, but as someone who primarily writes Haskell for my day job, camelCase is just less readable, IMO. Something I’m constantly irritated to even have to think about is casing acronyms consistently – which is avoided by snake_case or spinal-case – e.g. runAWSCommand or runAwsCommand, setHTMLElement vs setHtmlElement, versus run_aws_command, set_html_element, etc.
The strangest thing for me is the “hey, there are two mostly compatible syntaxes for this language we call ReasonML” – but it’s mostly the same thing as BuckleScript, from which we use the compiler anyway, except this, and this, and… oh, and by the way, it’s all OCaml inside. What?!
“Oh and also the docs for all these things (which you need) are all in completely different places and formats”
I think the ReasonML team wanted to match the conventions of JavaScript, where camel case is the norm.
I can see the annoyance though… and I have to wonder, is ReasonML syntax much better than OCaml’s? Was it really worth the break?
It’s not “better.” Yes, there are some cases where they’ve patched up some syntactic oddities in OCaml, but it’s mostly just change for the sake of being near JS.
Is it worth it? Depends. ReasonML and its team believe that OCaml failed to catch on because of syntax. If you agree, then yes, it’s worth it. And based on the meteoric rise I’ve seen of ReasonML, they may be right. That said, I believe (and I think I’m in good company) that OCaml didn’t catch on because it had two of everything, had really wonky package managers (and again, two of them), and still lacks a good multithreading story. In that case, no, the syntax is just change for no reason, and the only reason ReasonML is successful is because Facebook is involved.
I’m all for functional alternatives displacing JavaScript, but my main frustration with ReasonML is that any niceties you gain from using it are outweighed by the fact that it’s just one more layer on top of an already complex, crufty, and idiosyncratic dev environment. I think that’s what’s holding OCaml back as much as anything else.
Some people seem to think that OCaml’s syntax is really ugly (I quite like it) and unreadable. I’m guessing they’re the same people who complain about lisps having too many parentheses.
ReasonML does fix a few pain points with OCaml’s syntax, mostly related to semicolons (here, here, here), and supports JSX, but it also introduces some confusion with function call, variant constructor and tuple syntax (here, here, here), so it’s not really a net win IMO.
I think ReasonML was more of a rebranding effort than a solution to actual problems, and honestly it’s not even that bad if you disregard the casing. Dune picks up ReasonML files completely transparently so you can have a project with some files in ReasonML syntax and the rest in OCaml syntax. The only net negative part is the casing.
Esy and bsb are build orchestration tools, not package managers.
Esy is not OCaml-specific, it can e.g. include C++ projects as build dependencies. This is how Revery ( https://github.com/revery-ui/revery ) is being developed, for example. Esy also solves the problem of having to set up switches and pins for every project, with commensurate redundant rebuilds of everything. Instead, it maintains a warm build cache across all your projects.
Bsb specifically supports BuckleScript and lets it use npm packages. It effectively opens up the npm ecosystem to BuckleScript developers, something other OCaml tools don’t do (at least not yet).
Having ‘two of everything’ is usually a sign of a growing community, so it’s something I’m personally happy to see.
Re: casing, sure it’s a little annoying but if C/C++ developers can survive mixed-case codebases, hey, so can we.
“Semantic” and “HTML” don’t belong in the same sentence. HTML is presentational markup — it describes things like headings and emphases and tables — and it was never really designed to carry meaning.
In a way it is disappointing that XSLT never took off, because then we could have served meaningful data through XML (which, for all its evils, is very easy to define, standardise and validate against schema definitions) and transform it into something pretty for humans using XSLT and then we wouldn’t have to worry so much about whether a11y devices or search engines can make sense of it.
Headings and emphases and tables describe semantic relationships. I’m not sure there are any presentational tags left in HTML5. Even <b> and <i> were redefined in terms of semantic usage.

I actually worked on an old IE app that was all in on XSL and XSLT, server and client side. XSLT is an abomination. It works great for simple stuff. Start adding namespaces and versions to the schema and it falls apart completely. It has to do with having to match input namespaces in your XSLT for whatever XML input you’re given, IIRC. I recall we had to add a step to all our inputs to strip namespaces off tags.
I have also worked with XSLT. I cannot blame anyone, as I chose it myself because it seemed like the right tool for the task. It wasn’t. :)
I didn’t even get to use namespaces; I had already hit some hard walls and had to do terrible hacks to overcome its limitations.
I remember NetBeans had a somewhat adequate editor and maybe debugger for XSLT…
I’m happy XSLT didn’t take off.
A colleague of mine at Lonely Planet wrote a Ruby DSL (called RSLT, if memory serves) specifically to avoid having to deal with XSLT :)
If I’d understood XSLT better, I’d have made the DSL generate XSLT - the ruby-in-ruby DSL was a performance bottleneck we didn’t need.
The main problem I was trying to solve was ‘how do I encode several thousand similar rules, many of which are not yet known’. That’s a problem where the answer is basically always “create a new language”.
And since when does having multiple repos imply using git submodules to handle them? In my experience, proper packaging and semantic versioning are what make it easy to work with multiple repositories.
Of course that comes with additional bureaucracy, but it also fosters better separation of software components.
Sure, the mono-repository approach allows for a fast “single source of truth” lookup, but it comes at a high price as soon as people realize that they can also cut corners. Eventually it becomes a pile of spaghetti.
(For the record, just in case you could not tell, I’ve got a strong bias towards the multi-repo, due to everyday mono-repository frustration.)
The flip side is with multi-repo you will amplify the Conway’s law mechanism where people tend to introduce new functionality in the lowest friction way possible. If it would be easier to do it all in one project that’s what will happen, even if it would be more appropriate to split the additions across multiple projects.
Introducing friction into your process won’t magically improve the skills and judgement of your team.
I once proposed an alternative to git-subtree that splits commits between projects at commit-time: http://www.mos6581.org/git_subtree_alternative. This should help with handling tightly-coupled repositories, but requires client changes.
Why not just use a monorepo and make no client changes?
Because you want to share libraries with other projects.
Yes, there’s wisdom in what you say.
So, let me get this straight: Because your source control system doesn’t have innate knowledge of the linkages between your software components, that means it’s not up to the task of developing modern “cloud native” (God that term makes me want to cringe) applications?
I think not. Git is an ugly duckling, its UX is horrible but the arguments the author makes are awfully weak.
IMO expecting your VCS to manage dependencies is a recipe for disaster. Use a language that understands some kind of module and manage your dependencies there using the infrastructure that language provides.
Well said. I dislike Git too, but for a different reason – the source code is somewhat of a mess:
a hodgepodge of C, Python, Perl and Shell scripts
Git is the perfect project for a rewrite in a modern language like Go, Rust, Nim or Julia. A single Git binary similar to Fossil would make adoption and deployment much better.
I think at this point Git demonstrates that skipping the single-binary approach doesn’t actually hamper adoption at all.
Bitkeeper was a single binary - and it had a coherent command set. I miss it.
It still exists and is licensed under Apache 2.0: https://www.bitkeeper.org/
The only issue you have is no public host other than bkbits supporting bk.
Also no support in most IDEs.
I know many will point to the command line but having integrated blame/praise, diff, history etc is awesome.
Honestly, the bigger beef I have with it is that it barely comes with an installer and is difficult to package.
I think fossil has that…
I thought you are not supposed to use python because that would mean more dependencies… :P
The arguments are indeed weak – what is “cloud native”, anyway? However, I think he’s onto something – maybe the problem is not just Git, but everything around it as well? I mean, one could create a big giant monorepo in Git, but the rest of the tooling (CI especially) will still do the full checkout and won’t understand that there are different components. Monorepos make a lot of sense; however, it seems to me that we’re trying to use popular tools to tackle a problem they were not meant to solve (that is, Git being a full replacement for SVN/SVK/Perforce and handling monorepos).
I don’t personally think monorepos make a lot of sense, and I think multi-repos are the way to go. If each separate piece is its own project and you let the language’s packaging / dependency management system handle the rest, I don’t see the problem.
Examples I can think of where my point applies are Python, Ruby, Perl or Java. Unless maybe you’re using a language with no notion of packages and dependencies - C/C++ perhaps? I don’t see the issue.
The friction in coordinating branches and PRs across multiple repos has been an issue on every team I’ve worked on. Converting to a monorepo has been a massive improvement every time I’ve done it. Either you’ve used hugely different processes or you’ve never tried using a monorepo.
That’s a symptom that the project is not split across the correct boundaries. This is not different from the monolith-vs-services issue.
Amazon is a good example of splitting a complex architecture. Each team runs one or a very few services, each with its own repos. Services have versioned APIs, and PRs across teams are not needed.
If you have a mature enough project such that every repo has a team and every team can stay in its own fiefdom then I imagine you don’t experience these issues as much.
But even so, the task of establishing and maintaining a coherent split between repos over the lifetime of a project is non-trivial in most cases. The multi-repo paradigm increases the friction of trying new arrangements and therefore any choices will tend to calcify, regardless of how good they are.
I’m speaking from the perspective of working on small to mid-sized teams, but large engineering organizations (like Amazon, although I don’t know about them specifically) are the ones who seem to gain the most benefit from monorepos. Uber’s recent SubmitQueue paper has a brief discussion of this with references.
That’s interesting. Every team I’ve ever worked on had its architecture segmented into services such that cross branches and PRs weren’t an issue since each service was kept separate.
The advantage of a monorepo is that a package can see all the packages depending on it. That means you can test with all users and even fix them in a single atomic commit.
The alternative in a large organisation is that you have release versions and you have to support/maintain older versions for quite some time because someone is still using them. Users bear the integration effort whenever they update. In a monorepo this integration effort can be shifted to the developer who changes the interface.
I don’t see how you could do continuous integration in a larger organization with multiple-repos. Continuous integration makes the company adapt faster (more agile with a lowercase a).
Even if you use a language that has good (or some) package support, breaking a project into packages is not always easy. Do it too soon, and it will be at the wrong abstraction boundary and get in the way of refactoring, and to correct it you’ll have to either lose history or deal with importing/exporting, which ain’t fun.
But if all your packages/components are in a single repo, you still might get the boundaries wrong – the source control just won’t get in the way of fixing it as much.
100% on the surrounding tooling. CI tooling being based around Git means that a lot of it is super inflexible. We’ve ended up splitting repos just to get CI to do what we need it to do, adding friction to the surrounding processes.
A rethink of the ecosystem would be very interesting
Peak silicon valley capitalism: dying because the doctors couldn’t access very important info about you because the server with that info was turned off because you didn’t pay for the hosting.
I think peak Silicon Valley capitalism would be a free medical record host that profits off the data.
Or one that you pay but which sells the data anyway (23andMe).
Heh. Do me a favor, and do a quick search of software for your average doctor’s office or hospital, and let me know which one is the best.
I’m currently on my third stint in health care.
The stuff in a typical doctor’s office is not great, but I’d still take it over the average blockchain solution-in-search-of-a-problem any day of the week. The fundamental properties of a blockchain are the opposite of what you want for medical data. Blockchains have everything public and immutable by default and design. Medical data is private by law and must support corrections and errata. In fact, properly handling medical data often requires that you implement a time machine and be able to change history, then replay the new timeline forward.
Here’s an example: suppose there’s some ongoing treatment that requires documentation before claims on it can be paid, and the documentation doesn’t come in until after the first 4 claims. The first 4 claims would have been rejected, and now you have to rewind time, then replay those 4 claims and pay them.
Or say there’s a plan with a deductible: the first $500 of costs in the year are the patient’s responsibility, then the plan pays all claims after that. But a claim for something that happened early in the year doesn’t come in until later, after you think the deductible has been met. On many plans – including some of the US government-backed ones – you now have to start over, rewind time to the start of the year, and replay all the claims in chronological order, processing things according to what the deductible situation would have been if the claims had arrived in that order, and pull refunds from doctors you weren’t supposed to pay, order refunds to the patient from doctors who should have been paid by you, and reconcile the whole thing until the correct entities have paid the correct bills.
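To make the rewind-and-replay part concrete, here’s a minimal sketch of re-adjudicating a year of claims in service-date order, in TypeScript. The flat $500 deductible, the field names and the absence of co-insurance are simplifying assumptions for illustration, not any real payer’s rules.

```typescript
// A claim as billed, and the result of adjudicating it against the deductible.
interface Claim {
  id: string;
  serviceDate: string; // ISO date of when the care actually happened
  amount: number;      // billed amount in dollars
}

interface Adjudication {
  claimId: string;
  patientPays: number;
  planPays: number;
}

// Re-adjudicate every claim for the year in service-date order, as if the
// late-arriving claim had been there from the start.
function replayYear(claims: Claim[], deductible = 500): Adjudication[] {
  const ordered = [...claims].sort((a, b) =>
    a.serviceDate.localeCompare(b.serviceDate)
  );

  let remaining = deductible;
  return ordered.map((claim) => {
    const patientPays = Math.min(claim.amount, remaining);
    remaining -= patientPays;
    return {
      claimId: claim.id,
      patientPays,
      planPays: claim.amount - patientPays,
    };
  });
}
```

Diffing the old adjudications against the new ones is what tells you which doctors to pull refunds from and which bills the plan now owes.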
An append-only structure is fundamentally terrible at this unless you build a whole bunch of specialized stuff on top of it to treat later entries as addending, modifying or replacing earlier ones. And since at that point you’ve gone and built a mutable history structure on top of your immutable blockchain, why didn’t you just build the mutable history software in the first place and skip the blockchain? You’re not using it for any of the unique things it does.
And that’s just the technical/bureaucratic part of the problem. The social side of the problem is even worse. For example: sometimes it is incredibly important that a patient be able to scrub data out of their medical history, because that data is wrong and will influence or even prejudice doctors who see the patient in the future. Doctors who just ignore obvious symptoms and write down in the notes “it’s all in their head, refer to a psychiatrist” are depressingly common, and every future doctor will see those notes. When it turns out that doctor was wrong and there was a real problem, you do not want to have to fight with the next doctor who says “well, it’s here in your file that this was found to be psychosomatic”. You have to get that fixed, and it’s already hard enough to do without people introducing uncorrectable-by-design medical records (and no, merely putting a big “that doctor was wrong” addendum in the medical blockchain is not a real solution to this).
Compared to how much worse it could get with blockchain, the crappy hairballs of only-run-on-Windows-XP (or worse) software in a typical doctor’s office are downright pleasant.
this is the sort of thing I heard from Americans who work in health care when they reviewed the article ahead of time, yeah.
The big problem they flagged was data silos – lots of patient data trapped in systems that don’t talk to each other, and the ridiculous difficulty and expense of extracting your health record from your doctor (though passing your stuff to another doctor is apparently fine). You can see the blockchain pitch in there – “control your own data!” … not that it can offer a solution in practice.
It absolutely is not, at least technically, unless both doctors happen to use the same EMR, in which case it’s merely painful; or, if you’re extremely lucky, the same instance of the same EMR (for instance, half the health care in eastern Massachusetts uses Mass General’s EMR), in which case the experience is basically reasonable. Otherwise, you end up with some of the most absurd bullshit imaginable, that makes mailing paper charts seem reasonable in comparison; the best I’ve heard is a mailed CD containing proprietary viewing software in order to send imaging.
Interestingly, while “patients should own their own data” is a nice pitch, it’s actually somewhat problematic in practice. Health care providers may need to share information about a patient that patient would object to or should be kept unaware of (for instance, if a patient has been violent towards providers in the past, that information absolutely must be conveyed to any future providers that see them); and, like all professionals, health care providers use a lot of jargon in order to communicate clearly and precisely, which tends to make the chart incomprehensible to laypeople.
In the US, HIPAA provides a right to your medical records, similar (but not identical) to what a European would be familiar with from the GDPR. The gist of it is that you can make a request to any medical provider who’s treated you, and they have 30 days from the time of the request to provide you with a copy of your records. There are some exceptions (the most common exception is therapists’ notes), but not many.
I would guess that a lot of people probably don’t know they have this right, and probably a lot of medical providers aren’t forthcoming about making sure patients really understand their rights (they have to provide a notice of their privacy-related policies in writing, but a written notice in legalese is not the same as genuine understanding). A bigger problem is just that most people aren’t really able to look at medical records in their “standard” form and understand what they’re seeing.
And like the other commenter points out, interoperability between medical providers is not great. HIPAA allows medical providers to share information for treatment purposes, though, and the rules produce results that sometimes seem odd to modern tech people (for example, in the US the medical industry relies heavily on fax for sharing documents, because it’s often both the technically and legally simplest way to do so).
Maybe I’m missing something, but the examples you give are related to health insurance, not medical records per se – those are two different concerns that are related, but the latter can exist without the former. Medical records are immutable if they store facts, even wrong diagnoses – after all, how do you figure out that some diagnosis is wrong? By someone else claiming otherwise and providing supporting evidence. Further, medical records are not a single blob of information, they are more like tiny databases, for which we can have various ACLs for various pieces of information – IBM did quite a lot of work in that direction, IIRC. Nevertheless, blockchain is not the right tool, at least not for this domain.
Claims are medical records just like everything else.
But that depends on the definition of what a medical record is, no? In socialist countries with universal healthcare, there is no such thing as a claim that should be reimbursed or a plan with a deductible. However, what is universal across the board is the state of body and mind, that is, all diagnoses and prescribed medications.
From this comment by @ubernostrum further up the chain:
This applies even without the baroque details of the US health insurance system. And even in countries with universal coverage, you still need to look out for fraud, fraudulent prescription of drugs, etc. The money comes from somewhere and it shouldn’t be wasted.
Here in Finland “universal” claims for things like medical pensions (whatever it’s called, disability retirement) are routinely denied. It’s tough, because people do try to abuse the shit out of it, but sometimes proper claims get denied. The processes for countering these claims are long and costly.
We also have systems within the same public health-care district that don’t talk to each other. The private franchises have handled that better, by asking for permission to share data, because it gives a better customer experience.
This is fortunately changing, but the data is now within a single point of failure, also duplicated in part for every relevant franchise.
Getting your data into the unified system incurs a cost. I don’t know if you can opt out of it, but you probably don’t want to, as the cost is not high, I think insurances cover it (transfer of wealth style) and it’s more convenient to check the records online than papers in a binder somewhere.
That is, for me, the key point. I have had a close relative get the wrong treatment for years because a doctor hastily put in an incorrect diagnosis and everyone after that just assumed it was correct.
Why did it take so long to have it edited out of her records? Because one symptom of that diagnosis is denying it. Once that diagnosis is in your records, whatever you say, the next doctor will just put in a note saying, “patient does not think she is suffering from X”.
So as far as I’m concerned, mutability of medical records is absolutely crucial. (Of course with a detailed log of operations visible only on court order or something.)
Blockchains are indeed append-only logs, albeit ones constructed in an interesting way.
And yet within a blockchain-based system state changes are made over time (Bitcoin balances change, CryptoKittes get new owners) by parsing the data contained within those logs.
In a medical system this means that records are indeed mutable/scrubbable. Want to fix a record? Post an update to the system’s blockchain. The record is the result of parsing the logs, so this updates the record. If you want a scrubbable log that’s also doable, although it does affect trust in the system in ways that take more thinking through than just “but GDPR!!!”.
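As a sketch of what “the record is the result of parsing the logs” can mean, here’s one way a current record could be folded out of an append-only log, in TypeScript. The event shapes are illustrative assumptions, not any real EMR or blockchain schema.

```typescript
// Two kinds of log entries: set a field's value, or retract it entirely.
type RecordEvent =
  | { kind: "set"; recordId: string; field: string; value: string }
  | { kind: "retract"; recordId: string; field: string };

// The "current" record is whatever you get by replaying the log in order:
// later entries amend or remove what earlier entries said.
function currentRecord(
  log: RecordEvent[],
  recordId: string
): Map<string, string> {
  const record = new Map<string, string>();
  for (const event of log) {
    if (event.recordId !== recordId) continue;
    if (event.kind === "set") {
      record.set(event.field, event.value);
    } else {
      // Scrubbed from the derived view, though not from the underlying log.
      record.delete(event.field);
    }
  }
  return record;
}
```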
All that said, like the OP I’m very wary of “control your data” pitches of all kinds. Don’t get me started on data-ownership UBI. ;-)
The Tetris analogy does not work here. Having a buried gap forces you to play faster instead, and thus increases the risk of more buried gaps.
Also, why can’t we chop the pieces up in reality? In Tetris we can imagine a silver bullet: only using 1-block pieces.
The analogy does work insofar as technical debt (depending on its location) requires you to make more hacks to work around it which can spread throughout the system and eventually become unsustainable. You can think of the time pressure as representing the increase in effort required to ship a feature.
The equivalent of a one block piece would be a software component with no interrelationships or dependencies. If you can build a useful product all out of components like that then good on you.
I’m currently in the middle of rewriting a bunch of Elixir FP-ish code from a “functional” approach to look more OOPy, because the maintenance burden is just beyond stupid.
Everybody talks about how magical functional programming is, but nobody seems to be really speaking up yet about functional programming as she is practiced:
For all of its very many excesses, it’s so much easier to partition and expand on an OOP codebase than this “functional” nonsense. To wit:
Compare with:
OOP has so many problems, but I find it so much easier to get a sense of the size of a system and to page the parts I need in and out of my working memory than to trace spaghetti for hours. I’m sick of people touting FP as a silver bullet when most of the shooters are cross-eyed.
I’ve long wondered when we’d figure out how to write FP so poorly we undo all of its supposed benefits. The same thing happened to OOP.
Unfortunately, there are very few computing paradigms out there for all the Medium thinkpieces that need to be written about how X paradigm is the worst.
Maybe programming isn’t the answer. Maybe we should just roll the clock back and have desks upon desks of people pushing paper around. It’d give more jobs for people that are not qualified to program, and reduce homelessness.
Of course, that’d make it harder to concentrate wealth in the hands of founders and execs and investors, but since I’m not seeing much of that, fuck’em.
Almost all of those bullet points are arguments in favor of static typing. Not arguments in favor of object oriented programming.
I’m speaking as someone who is terrible enough at programming to have written spaghetti code in both styles. It’s spaghetti code. Of course it sucks, and you don’t realize how much it sucks until several months later when you have to re-remember your terrible “design” choices, and, worse, have to extend it in a way that you did not plan for.
Static typing helps a great deal, but being able to define classes of objects and duplicate and tweak them is enormously useful – vulgar OOP isn’t totally wrong. Being able to communicate via messages, dispatch messages, inspect messages, store and replay messages – theoretical (Kay-esque) OOP also has a lot to offer.
I agree on the spaghetti code, but again will point how exasperating it is to see people celebrating FP as a cureall when most of the common devs seemingly can’t be trusted to scale it beyond a few tutorials. At least with OOP stuff you get a coloring book and if the devs are smart enough not to eat the crayons they can produce something that looks like the source material.
In other words, functional isn’t a programming paradigm. It isn’t a template for designing your system; it’s just a synonym for “computation without mutation and without backtracking*.” Which happens to contradict vulgar OOP, which has mutation, but that doesn’t change the fact that the proper counterpart to OOP would probably be something like Reactive Programming or Data-Oriented Design.
* With backtracking would be logic programming.
What has convinced you that the team that made your list of mistakes when attempting to follow the FP paradigm will do any better when attempting to follow the OOP paradigm?
Most programming problems are people problems. You’re only going to get truly good code if you commit to paying for it, up to and including formal code reviews wherein people can say “no, you can’t do that.”
Also: unityped functional programming is a special type of Hell. Learned that from my one (small) Clojure project and will never do it again.
This perfectly sums up the nagging feeling I’ve been having about FP, but haven’t been able to express. I have only ever heard people laud FP as a better paradigm, but I always felt that it would be so difficult to maintain the projects I see at scale in the real world, and I see how difficult OOP is… I do not envy the hacks that are in place for large FP codebases. I can grok an OOP codebase (hell, I can grok disassembly), but the moment someone has given me FP code I spend more time trying to keep the entire thing in my head. To this day I don’t think I’ve been able to understand a single decently sized FP codebase.
There are benefits and costs to both paradigms, but the idea that either is “unusable” or “doesn’t scale” is genuinely laughable. The fact that you don’t use it, so you can’t figure out how to use it, isn’t actually any kind of argument for anything. Typed functional programming has been around since 1973; C was invented in 1972. People both less and more talented than either of us have used both OOP and FP, strongly and dynamically typed, for decades at both small and gargantuan scales.
I said that entirely from my personal perspective and meant no offense; your reaction and tone are exactly the reason I tend to avoid even discussing my hesitation about programming paradigms. This could be (and most likely is) entirely a failing on my end, but I have actively tried to learn and read large projects in both, and friendlysock’s experience mirrored mine. I wasn’t trying to slander, merely to mention that I personally struggle with this paradigm more than with almost anything else in computer science. I can’t use it. I can’t understand how it scales to large teams.
In my experience it’s harder to teach an OOP programmer FP than a beginner. Part of this, I believe, has to do with how learning new paradigms is inherently humbling. The things that made you feel smart before now make you feel dumb. Dynamic typing, for example, makes me think, “Well, I don’t know how anyone makes anything useful with this.” In reality, though, the reason I feel this way is almost certainly that I’m missing pieces of knowledge of how to use languages like that effectively.
I think you could use it with practice. I think you could learn how to scale it to large teams. It’s okay that you don’t want to do or learn how to do either of those things but I wouldn’t internalize that decision as an inability. The people who are using functional programming at scale almost certainly aren’t smarter than you, just more patient and willing to feel dumb. Or more likely they didn’t happen to deep dive into an entire other paradigm where their previous intuitions are less useful.
While your post may have hurt my feelings to an extent and I’m sure that leaked out in my tone, my overarching goal was to dispel illusions of inadequacy.
I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff but I still think the outrage is overblown. It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line. In the short term the average user gets better compatibility which seems like a win overall even if the diversity proponents are a little upset.
If it’s an organization, you should always look at their incentives to know whether they have a high likelihood of going bad. Google was a for-profit company aiming for an IPO. Their model was collecting info on people (aka a surveillance company). These are all incentives for them to do shady stuff. Even if they want “Don’t Be Evil”, the owners typically lose a lot of control over whether they do that after they IPO. That’s because boards and shareholders that want numbers to go up are in control. After IPOs, decent companies start becoming more evil most of the time, since evil is required to always make specific numbers go up or down. Bad incentives.
It’s why I push public-benefit companies, non-profits, foundations, and coops here as the best structures to use for morally-focused businesses. There’s bad things that can still happen in these models. They just naturally push organizations’ actions in less-evil directions than publicly-traded, for-profit companies or VC companies trying to become them. I strongly advise against paying for or contributing to products of the latter unless protections are built-in for the users with regards to lock-in and their data. An example would be core product open-sourced with a patent grant.
Capitalism (or if you prefer, economics) isn’t a “conspiracy theory”. Neither is rudimentary business strategy. It’s amusing to me how many smart, competent, highly educated technical people fail so completely to understand these things, and come up with all kinds of fanciful stories to bridge the gap. Stories about the role and purpose of the W3C, for instance.
Having read all these hand-wringy threads about implementation diversity in the wake of this EdgeHTML move, I wonder how many would complain about, say, the lack of a competitor to the Linux kernel? There’s only one kernel, it’s financially supported by numerous mutually distrustful big businesses and used by nearly everybody, its arbitrary decisions about its API are de-facto hard standards… and yet I don’t hear much wailing and gnashing, even from the BSD folks. How is the linux kernel different than Chromium?
While I actually am concerned about a lack of diversity in server-side infrastructure, the Linux kernel benefits, as it were, from fragmentation.
This simply isn’t true. There’s only one development effort to contribute to the kernel. There are, on the other hand, many branches of the kernel tuned to different needs. As somebody who spent his entire day at work today mixing and matching different kernel variants and kernel modules to finally get something to work, I’m painfully aware of the fragmentation.
There’s another big difference, though, and that’s in leadership. Chromium is run by Google. It’s open source, sure, but if you want your commits into Chromium, it’s gonna go through Google. The documentation for how to contribute is littered with Google-specific terminology, down to including the special internal “go” links that only Google employees can use.
Linux is run by a non-profit. Sure, they take money from big companies. And yes, money can certainly be a corrupting influence. But because Linux is developed in public, a great deal of that corruption can be called out before it escalates. There have been more than a few developer holy wars over perceived corruption in the Linux kernel, down to allowing it to be “tainted” with closed source drivers. The GPL and the underlying philosophy of free software helps prevent and manage those kinds of attacks against the organization. Also, Linux takes money from multiple companies, many of which are in competition with each other. It is in Linux’s best interest to not provide competitive leverage to any singular entity, and instead focus on being the best OS it can be.
Performance tuning is qualitatively different than ABI compatibility. Otherwise, I think you make some great points. Thanks!
If there is an internal memo at Google along the lines of “try to break the other web browsers’ perf as much as possible” that is not “rudimentary business strategy”, it’s “ground for anti-trust action”.
It’s as good of a strategy as helping the Malaysian PM launder money and getting a 10% cut (which… hey might still pay off)
Main difference is that there are many interoperable implementations of *nix/SUS/POSIX libc/syscall parts and glibc+Linux is only one. A very popular one, but certainly not the only. Software that runs on all (or most) *nix variants is incredibly common, and when something is gratuitously incompatible (by being glibc+Linux or MacOS only) you do hear the others complain.
If by “runs on” you mean “can be ported to and recompiled without major effort”, then I agree, and you’re absolutely right to point out the other parts of the POSIX and libc ecosystem that makes this possible. But I can’t think of any software that’s binary compatible between different POSIX-ish OSs. I doubt that’s even possible.
On the other side of the analogy, in fairness, complex commercial web apps have long supported the various incompatible quirks of multiple vendors’ browsers.
Multiple OSs, including Windows, can run unmodified Linux binaries.
As you just said it,
There’s no one company making decisions about the kernel. That’s the difference.
Here comes fuchsia and Google’s money :/
I am disgusted with the Linux monoculture (and the Linux kernel in general), even more so than with the Chrome monoculture. But that fight was fought a couple decades ago, it’s kinda late to be complaining about it. These complaints won’t be heard, and even if they are heard, nobody cares. The few who care are hardly enough to make a difference. Yes we have the BSDs (and I use one) and they’re in a minority position, kinda like Firefox…
How much of a monoculture is Linux, really? Every distro tweaks the kernel at least to some extent, there are a lot of patch sets for it in the open, and if you install a distro you get to choose your tools from the window manager onwards.
The corporatization of Linux is IMO problematic. Linus hasn’t sent that many angry emails, proportionally speaking, but they make the headlines every time, so my conspiracy theory is that the corporations that paid big bucks for board seats on the Foundation bullied him into taking his break.
We know that some kernel decisions have been made in the interest of corporations that employ maintainers, so this could be the tip of an iceberg.
Like the old Finnish saying “you sing his songs whose bread you eat”.
I think this is true. If Google screws us over with Chrome, we can switch to Firefox, Vivaldi, Opera, Brave etc and still have an acceptable computing experience.
The real concerns for technological freedom today are Google’s web application dominance and hardware dominance from Intel. It would be very difficult to get a usable phone or personal server or navigation software etc without the blessing of Google and Intel. This is where we need more alternatives and more open systems.
Right now if Google or Intel wants to, they can make your life really hard.
Do note that all but Firefox are somewhat controlled by Google.
Chrome would probably have been easier to subvert if it wasn’t open source; now it’s a kind of cancer in most “alternative” browsers.
I don’t know. MIPS is open sourcing their hardware and there’s also RISC-V. I think the issue is that as programmers and engineers we don’t collectively have the willpower to make these big organizations behave because defecting is advantageous. Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.
“Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.”
Boom. You nailed it! I’ve been calling it out in threads on politics and business practices. Most of the time, people that say they’re about specific things will ignore them for money or try to rationalize how supporting it is good due to other benefits they can achieve within the corruption. Human nature. You’re also bringing in organizations representing developers to get better pay, benefits, and so on. Developers are ignoring doing that more than creatives in some other fields.
Yup. I’m not saying becoming organized will solve all problems. At the end of the day all I want is ethics and professional codes of conduct that have some teeth. But I think the game is rigged against this happening.
I don’t think RISC-V is ready for general purpose use. Some CPUs have been manufactured, but it would be difficult to buy a laptop or phone that carries one. I also think that manufacturing options are too limited. Acceptable CPUs can come from maybe Intel and TSMC, and who knows what code/sub-systems they insert into those.
This area needs to be more like LibreOffice vs Microsoft Office vs Google Docs vs others on Linux vs Windows vs MacOS vs others
They already are screwing us over with chrome, this occurrence is evidence of that.
You don’t have to write JavaScript, you have to write Elixir - which has a much smaller community around it than JavaScript does.
This does look cool though, I just wish there were some live examples I could play with in my browser.
On the other hand, the Elixir community is very friendly. :)
Supposedly something like LiveView is coming to .NET - https://codedaze.io/introduction-to-server-side-blazor-aka-razor-components/ - but the post says:
In principle, people could take this approach in other languages as well. But I think Elixir / Erlang are uniquely positioned to do it well, as LiveView is built on Phoenix Channels, which (because they use lightweight BEAM processes) can easily scale to keep server-side state for every visitor on your site: https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections
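As a rough, language-agnostic illustration of the underlying pattern – the server keeping UI state per connected visitor and pushing updated markup down a socket – here’s a TypeScript sketch using the ws package for Node. It shows only the general shape of the idea, not how LiveView or Phoenix Channels are actually implemented.

```typescript
import { WebSocketServer } from "ws";

const server = new WebSocketServer({ port: 8080 });

server.on("connection", (socket) => {
  // Per-visitor state lives on the server, in this connection's closure.
  let count = 0;

  socket.on("message", (message) => {
    if (message.toString() === "increment") {
      count += 1;
      // Push updated markup to the client; a thin client-side script would
      // swap it into the page.
      socket.send(`<span id="count">${count}</span>`);
    }
  });
});
```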
Is that comment supposed to contrast the friendly Elixir community with the JS community? Is the JS community considered unfriendly? It’s way, way bigger than the Elixir community, so there are bound to be some/more unfriendly people. Maybe it’s so big that the concept of a “JS community” doesn’t even make sense. It’s probably more like “Typescript community”, “React community”, “Node community”, etc… But there are a lot of friendly people and helpful resources out there in JS-land, in my experience. I hope others have found the same thing.
The Elixir community is still in the “we’re small and must be as nice as possible to new people so they’ll drink the koolaid” phase. The “community”, such as it is, is also heavily drawn from job shops and the conference circuit, so that’s a big factor too.
Past the hype it’s a good and serviceable language, provided you don’t end up on a legacy codebase.
Sounds like Rails, all over again.
Who hurt you @friendlysock?
How would you define ‘legacy codebase’? I’m assuming it’s something other than ‘code that is being used to turn a profit’..
Ha, you’re not wrong! I like that definition.
From bitter experience, I’d say it would be an Elixir codebase, written in the past 4 or 5 years, spanning multiple major releases of Ecto and Phoenix and the core language, having survived multiple attempts at CI and deployment, as well as hosting platforms. Oh, and database drivers of varying quality as Ecto got up to speed. Oh oh, and a data model that grew “organically” (read: wasn’t designed) from both an early attempt at Ecto as well as being made to work with non-Ecto-supported DB backends, resulting in truly delightful idioms and code smells.
Oh, and because it is turning a profit, features are important, and spending time doing things that might break the codebase is somewhat discouraged.
Elixir for green-field projects is absolutely a joy…brown-field Elixir lets devs just do really terrible heinous shit.
Totally agree, but I would say that significantly more heinous shit is available to devs in Ruby or another dynamic imperative language. The Elixir compiler is generally stricter and more helpful, and most code is just structured as a series of function calls rather than as an agglomeration of assorted stateful objects.
The refactoring fear is real though. IMO the only effective salve for that sickness is strong typing (and no, Dialyzer doesn’t count).
So you’re saying that Elixir is just another programming language? It’s not the Second Coming or anything?
I mean, it’s really quite good in a number of ways, and the tooling is really good. That said, there’s nothing by construction that will keep people from doing really unfortunate things.
So, um, I guess to answer your question: yep. :(
😊 I can see how it sounded that way, but I didn’t mean to imply anything about anyone else. The parent post said the Elixir community is small, so I was responding to that concern.
I feel you’re just trying to polemic on the subject… The author of this comment probably didn’t mean harm, don’t make it read like so.
I’m not sure what you mean by “trying to polemic” – that doesn’t make sense to me as a phrase – but it was a genuine question about whether the JS community is considered to be unfriendly. I’d be happy to be told that such a question is off-topic for the thread, and I certainly don’t want to start a flame war, but I didn’t bring up the friendliness of the community. I’m sure the author didn’t mean harm, but I read (perhaps incorrectly) that part of their reply as part of an argument for using Elixir over JS to solve a problem.
What I meant to say was: “If this looks like it could be a good fit for thing you want to do, but you’re daunted by the idea of learning Elixir, don’t worry! We are friendly.”
I meant starting a controversy, sorry for my poor English! I’m sorry if it felt harsh, that wasn’t what I tried to share. I really thought your goal was to start this flame war.
Every community has good and bad actors. Some people praise a lot some communities, but I don’t think they mean the others aren’t nice either.
The only thing that I could think of is that smaller communities have to be very careful with newcomers, because it helps to grow the community. JS people don’t need to be nice with each other, the community and the project are way pas that need. So I guess you would find a colder welcome than with a tiny community.
Hey there, polemic is a legit English word, so don’t be sorry for someone else’s ignorance! :)
I’m not ignorant (well I am, but not about this): polemic is indeed an English word, but it’s not a verb. The phrase “trying to polemic” doesn’t make sense in English, it requires interpretation, which makes the meaning unclear. I can think of two interpretations for “trying to polemic” (there may be others) in the context of the comment:
The thing is that not everyone is at your level of English proficiency. You’re having a discussion here with people from around the world, you’ll need to make a couple of adjustments for expected quality of English and try to get the rough meaning of what they’re saying, otherwise you’ll be stuck pointing out grammatical errors all day.
I wasn’t really trying to point out an English error, and perhaps I did a poor job of that. I stand by the claim that it is an English error though.
I work with non-native English speakers all day, I’m aware of the need to try and understand other people and to make sure we’re on the same page. I’ll give a lot of slack to anyone, native or non-native, who’s trying to express themselves. The problem with the phrase “I feel you’re just trying to polemic on the subject’ is that at least some of the interpretations change the meaning. On the one hand, it could be saying that my comment was polemic, on the other it could be saying that my comment was trying to start a polemical thread. It’s not the same thing. And, for what it’s worth, if you’re going to throw an uncommon (and quite strong) English word like “polemic” out there it’s best if you correctly understand the usage. If the author had accused me of trolling, which is I think what they meant, that would have been both clearer and more accurate (though my intent was not to troll)