I have a very serious suspicion that the industry’s move to flat designs – for everything, not just the website UI elements the article analyzes – is, in part, economically motivated. It’s not a quality judgement per se (although I certainly don’t like flat UIs, people who grew up with them might think differently), but it’s an observation I’ve made over several years.
About 20 years ago, when I got my first gig in the computer industry, one of the people I worked with was an icon designer at a media agency. He did other things besides drawing icons, of course, but whenever the agency needed an icon for an application, or a set of buttons and icons for a website (this was the peak of the Flash era, so such requests were very common), he was the man.
For particularly challenging contracts – e.g. OS X applications, which had those amazing, detailed icons – it could sometimes take two or three weeks for several icons to be drawn and the “winner” to be picked and retouched. The price for all that effort was the kind of money that, provided you outsource it someplace cheap, can get you a whole mobile app written today. And that was the sort of time frame (and budget!) involved in any major design project.
It was also very hard to string things together. The agency this guy worked with made a lot of money out of what they called “design kits” – ready-made buttons, icons and whatnot that were specifically designed to be easy to customize. That was because getting inexperienced people to make a good UI out of photorealistic buttons and 3D buttons usually got you mad bling and crashes, rather than an image of the cyberfuture.
But that was also at a time when you could charge $25 for a shareware application and it sounded like a decent price (and this was 2002 dollars, not 2020 dollars). Most applications today – sold for $1.99 on an app store or, more commonly, in exchange for personal data that may or may not turn into profit eventually – are developed on a budget that 2002-era design would gobble up in no time.
So of course most icons are basically a blob of colour with an initial on them and all buttons are flat rectangles with an anonymous, black-and-white symbolic icon on them: the kind of time frames and budgets involved don’t leave room for anything more advanced than that. This not only takes care of the money problem, it also makes it a lot easier to work with “design kits”, and makes it less likely for brofounders to get mad bling instead of something they can at least demo to potential investors when, instead of paying experienced designers at market rates, they hire college dropouts for peanuts.
I don’t want to claim that this is a universal explanation. I’m sure that, especially after Apple and Google made it fashionable, a lot of companies adopted flat design in good faith, and that a lot of experienced designers adopted it both in good faith and for their own subsistence (I’m still in touch with that guy, and while he also thinks most flat designs are crap, he will gladly come up with one for his paying customers).
But I also doubt that flat design is all about good usability and user friendliness, especially since, as soon as you ask about usability studies (real ones, not “I asked twenty people in the hallway, oh, and I asked my dad and my mom and my sister and they’re really not computer nerds”) and numbers, everyone starts ranting about how these things never paint a realistic picture of usage patterns, and usability has a certain Zen that can’t be captured by raw numbers and experiments, and we don’t, like, really understand how perception and the human brain work…
National Instruments produces the LabView visual programming language. Pretty much every operator and function has its own icon. See an example of a LabView program. They had full-time icon designer(s) to make those. It’s a hard job, especially for niche, technical, or abstract concepts. How would you create an icon for a parallel while loop?
Users could create custom icons for their “functions”. One interesting consequence is that library writers could create visually distinctive, branded icons for their functions.
It’s cool that you mention LabView, because one of the reasons why their icons are so great is that, if you’ve done non-virtual instrumentation before, most of them make sense. They aren’t some abstract symbol thing – for example, variable inputs have tiny beveled buttons and displays on, what, 48x48px icons? They kinda look like the real-life devices or blocks that they represent. And those that are just black and white symbols tend to follow decades-old standards for electrical symbols, logic diagrams etc., which everyone can look up, or knows by now. (Not because they’re intuitive, btw, or because they follow some unconscious interpretative impulses that have been ingrained in our brains since time immemorial, but because they drill them into you through four years of engineering school :) ).
The downward pressure on the economics of creating software has many parallels:
The goal is always the same: commoditization.
I’m not so sure about this. A lot of the “flat UI trend” was started by Microsoft and Google – others followed. And you can make “UI packs” which are not flat icons with just as much ease.
My own theory is that it’s essentially a confusion between graphic design and user interface design. People who do either are often just called “designers”, but they’re quite different disciplines: one is about making things look good, the other about making them work well.
I’ve worked with quite a few designers over the years who made great looking designs which … were not all that great to use, and not infrequently got very defensive about small changes to make it easier to use “because muh art”.
This is not intended as a condemnation of graphical designers, but graphical design is an art skill whereas UI design is an engineering skill. Ideally, both should work together to create a great-looking UI that works well.
Why are companies like Google making unusable flat-UI designs with light-grey, hairline-thin text on a white background? I don’t know … In principle they should have enough money to hire UI designers, and should be knowledgeable enough to know the difference between graphic and UI design. I guess fashion is one hell of a drug?
And you can make “UI packs” which are not flat icons with just as much ease.
It’s even easier – but they’re also pretty useless. What would you customize about it? A while back there was an entire debate about the right way to represent buttons: “clean” vs. “outlined” bevel (i.e. just the bevel, as Windows 95 did it, or a black outline around the bevel, a la Amiga’s MUI?), curved vs. non-curved lit edges, harsh vs. soft gradients and so on, and each one fit one kind of application better than another, at least in aesthetic terms. I don’t know if those discussions made sense or not, who knows, maybe it was just in order to make it look like the money you paid was worth it :). But what would a “design pack” consist of, when all you get to customize about a button are the color, padding, and font?
You can sort of see that in tools like oomox (https://github.com/themix-project/oomox). It’s not exactly a design kit, and it answers a completely different problem (GTK being obtuse), but it’s close enough.
So of course most icons are basically a blob of colour with an initial on them and all buttons are flat rectangles with an anonymous, black-and-white symbolic icon on them: the kind of time frames and budgets involved don’t leave room for anything more advanced than that.
not even text? we are stuck at pictograms now?!
You’re kidding, but six or seven years ago, when I last worked on a consumer device, the design team insisted that there should be as little text as possible. First, because text looks intimidating and is not user-friendly. Second, because graphical representations would allow us to establish a better brand presence – our own graphical design language, as opposed to generic text, would make our identity more recognizable. The gizmo’s PM had no strong opinions on the first point, but loved the second one; plus, as little text as possible meant as little internationalization as possible – so, yep, it was pictograms everywhere.
Pictograms don’t always make localisation easy. The classic example of this is the choice a bunch of companies made in the late ‘80s to use an owl as the help symbol. Owls are traditionally associated with wisdom in European culture, but in China they’re associated with bravery and in Latin America with black magic and evil. Localisation translated the text, but they got a lot of support calls from people who didn’t realise that the brave / evil animal was the one you clicked on for help text. Translating words is generally much easier than translating cultural symbols. A lot of pictures have a variety of different cultural meanings, and picking a replacement that matches the particular association you meant can be a tricky problem.
I think I brought up the exact same example with the owl, and all I got was “well we’re not gonna use an owl then, we’re going to make sure we use only symbols with non-controversial meanings” :).
That department, despite masquerading as an engineering group, was effectively the cargo cult arm of the Steve Jobs Fan Club, and virtually all decisions were made by non- or semi-technical people based on whether or not they thought Apple would have done it the same way. I could write… well, maybe not a book, because my patience (which I’d previously thought to be virtually neverending) lasted less than a year, but at least two or three chapters, about all the hilarious disasters that this process resulted in. Thankfully, most of the products I worked on during that time didn’t even make it past the prototyping phase, so the world only got to laugh at one of them, I think.
(Edit: FWIW, it’s very likely that I did a lousy job at advocating for these things, too. By the time that happened I had long lost any kind of patience or trust in the people making these decisions, or their higher-ups, plus at the end of the day it really wasn’t my job, either. I heard it was pretty hard to argue with Steve Jobs when he was alive, too; arguing with a handful of people possessed by his spirit was an uphill battle that I most certainly lost.)
10 selling points, no downside, that’s fishy. No mention of the lack of static typing, for example. No mention of Kotlin or Scala. That sounds like someone who’s a bit too enthusiastic to acknowledge the shortcomings that any tool has.
The lack of static typing as a downside is, and will always be, subjective. As Rich Hickey famously says, “What’s true of every bug? 1) it passed your unit tests 2) it passed the type checker.”
Spec, as a structural testing mechanism, is, I believe, generally understood to be a preferable alternative to static typing, not an afterthought bolted on to appease devs who demand static typing.
I don’t understand what the quote is trying to say. “Every bug” is still fewer bugs if type bugs are ruled out by a type checker.
My reading is that static typing isn’t a silver bullet. That’s an oversimplification. See the video below for a bit more context surrounding that particular quote, and maybe some rationale behind Clojure’s approach to typing and the role that Spec plays.
It’s amusing to me that people accept this as a critique of static typing in general rather than a critique of certain hilariously-bad type systems like that of C or Java.
I am trying to think of the last time I encountered a bug in a piece of software written in a language with a not-terrible static type system, but … I can’t think of one.
Right.
I mean obviously part of the reason I haven’t run into many bugs is that there just aren’t as many programs written using good type systems. Programs written in Haskell or OCaml or Elm still have bugs, but they only have certain classes of bugs. Just because you can’t get rid of every bug doesn’t mean it’s pointless to get rid of the ones you can, and Rich’s punchy quote seems to imply that this line of reasoning is invalid.
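To make “certain classes of bugs” concrete, here’s a minimal, hypothetical Haskell sketch (the names are invented for illustration): two ids that share a runtime representation but that the type checker refuses to let you mix up – the kind of bug a dynamically typed program would only catch in tests, if at all.

-- Hypothetical example: newtypes rule out "passed the wrong kind of id"
-- bugs at compile time, at zero runtime cost.
newtype UserId  = UserId Int
newtype OrderId = OrderId Int

lookupUser :: UserId -> String
lookupUser (UserId n) = "user #" ++ show n

main :: IO ()
main = do
  putStrLn (lookupUser (UserId 7))
  -- putStrLn (lookupUser (OrderId 7))  -- rejected by the type checker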
I see what you’re saying. And I agree that, on the surface, that quote seems like it’s dumping on static typing entirely, but I don’t think that’s the case. In the talk in which Hickey drops that quote he expands on it a bit, to a degree that my simply slapping it into a response doesn’t do justice. You’re the Leiningen guy, you know Clojure, you’re presumably familiar with the talk.
My takeaway was that static typing, like unit tests, catches bugs. Certain classes of bugs (as you mention above). However, in some complex and complicated systems, those classes of bugs aren’t the primary concern. I’ve never worked in that kind of system, but I like what I’ve seen of clojure and I haven’t found this particular line of reasoning disagreeable.
You’re the Leiningen guy, you know Clojure, you’re presumably familiar with the talk.
Yep, I’ve seen the talk. It feels to me like he already decided he doesn’t like static typing because of bad experiences with Java and uses that experience to make straw-man arguments against all static type systems in general. I can imagine the existence of a system for which “bugs that no type system can catch” are the primary concern, but I’ve never encountered such a system myself. (I have worked with plenty of systems where the cost of the type system is higher than the benefit of squashing those bugs, but that’s a very different argument than Rich’s.)
Yep, I’ve seen the talk. It feels to me like he already decided he doesn’t like static typing because of bad experiences with Java and uses that experience to make straw-man arguments against all static type systems in general
There seems to be at least some evidence that he knows Haskell pretty well, he just doesn’t really publicize it.
I think it’d be really funny if he keynoted ICFP with a talk on algebraic effects and never mentions the talk ever again. Will never happen, but a guy can dream.
I’m pretty skeptical that anyone who “knows Haskell pretty well” would produce the “Maybe Not” talk. More generally, my experience is that anyone willing to invest the time and energy into learning Haskell tends to be more tuned-in to the actual tradeoffs one makes when using the language, and it’s clear that his emotionally-framed talking points overlap very little with what actual Haskell programmers care or think about. Of course, it could be the case that he does know the language pretty well and simply talks about it like a frustrated novice to frame emotional talking points to appeal to his Clojure true-believers, but this seems far-fetched.
IMO the “Maybe Not” talk gets more flak than it deserves. Function subtyping is a valid data representation, and Maybe Int -> Int can be represented as a subtype of Int -> Maybe Int. Haskell chooses not to allow that representation, and it is a way in which the type system is arguably incomplete (in the “excludes valid programs” sense).
You’ll have to work hard to convince me that Rich Hickey was arguing on the level of critiquing the “function sub-typing” capabilities of Haskell’s type system vs. the more prosaic “static typing bad!” bullshit he falls back on again and again. Stated straightforwardly, his argument is basically “Maybe is bad because it means you will break the expectations of the caller when you need to introduce it to account for optionality.” And, I suppose it is in fact terrible when you don’t have a type-checker and depend on convention and discipline instead. So, Rich Hickey 1, static typing 0, I guess?
As to “Maybe Not” getting more flak than it deserves…yeah, we’ll have to agree to disagree. (And I’ll note here I’m really surprised that you in particular are taking this position considering how often I see you on here deeply engaging with such a broad variety of academic computer science topics, without needing to use strawmen or appeal to emotion to argue a position.)
For example, “Maybe/Either are not type system’s or/union type.” Okay. How do you even argue with that? I don’t even really understand what he’s trying to assert. Does he not believe the Curry-Howard correspondence is valid? For that matter, which type system? Will I get lambasted by the apologists for not understanding his super subtle point, yet again? Meh.
Someone who was honestly and deeply engaged with the ideas he spends so much time critiquing wouldn’t be babbling nonsense like “you would use logic to do that, you wouldn’t need some icky category language to talk about return types” or “type system gook getting into spec”…god forbid!
I’ll give him this though: Rich Hickey is doing a great job of convincing the people who already agree with him that static typing is bad.
(Edited to fix some links and a quote)
As to “Maybe Not” getting more flak than it deserves…yeah, we’ll have to agree to disagree. (And I’ll note here I’m really surprised that you in particular are taking this position considering how often I see you on here deeply engaging with such a broad variety of academic computer science topics, without needing to use strawmen or appeal to emotion to argue a position.)
Thank you! I should note that just because I deeply engage with a lot of different CS topics doesn’t mean I won’t flail around like an idiot occasionally, or even often.
For example, “Maybe/Either are not type system’s or/union type.” Okay. How do you even argue with that? I don’t even really understand what he’s trying to assert. Does he not believe the Curry-Howard correspondence is valid? For that matter, which type system? Will I get lambasted by the apologists for not understanding his super subtle point, yet again? Meh.
Let me see if I can explain this in a less condescending way than the talk. Let’s take the type Maybe Int, which I’ll write here as Maybe(ℤ). From a set theory perspective, this type is the set {Just x | x ∈ ℤ} ∪ {Nothing}. There is an isomorphism from the Maybe type to the union type ℤ ∪ {Nothing}. Let’s call this latter type Opt(ℤ). Opt(ℤ) is a union type in a way that Maybe(ℤ) is not, because we have ℤ ⊆ Opt(ℤ) but not ℤ ⊆ Maybe(ℤ): 3 ∈ ℤ, but 3 ∉ Maybe(ℤ). Still, we have Just 3 ∈ Maybe(ℤ), and an isomorphism that maps Just 3 ↦ 3, so in theory this isn’t a problem.
The problem is that Haskell’s type system makes design choices that make that isomorphism not a valid substitution. In fact, I don’t think Haskell even has a way to represent Opt(ℤ), only types isomorphic to it. Which means that we can’t automatically translate between “functions that use Opt(ℤ)” and “functions that use Maybe(ℤ)”. Take the functions
foo :: Maybe Int -> Int
foo Nothing = 0
foo (Just x) = x*x
-- I don't think this is possible in Haskell, just bear with me
bar :: Opt Int -> Int
bar Nothing = 0
bar x = x * x
Is map foo [1..10] type-safe? Not in Haskell, because map foo has type [Maybe Int] -> [Int] and [1..10] has type [Int]. Is map bar [1..10] type-safe? In a type system that supported “proper” union types, arguably yes! ℤ ⊆ Opt(ℤ), so map bar is defined for all of [Int]. So maybe types emulate useful aspects of union types but, in the Haskell type system, don’t have all the functionality you could encode in union types.
Now there’s a common objection to this: that you can wrap the values in Just to make map foo type-safe. This is usually people’s objection to this talk. And most of the time you can do this. But this is just a form of emulation, not reproducing the core idea. It’s similar to how in OOP you can “emulate” higher-order functions with the strategy pattern. But it’s not a perfect replacement. For any given emulation I can probably construct an example where your emulation breaks down and you have to try something slightly different. Maybe it doesn’t work if I’m trying to compose a bunch of fmaps.
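For concreteness, a minimal sketch of that emulation (reusing the foo from above; the commented-out line is the one the typechecker rejects):

foo :: Maybe Int -> Int
foo Nothing = 0
foo (Just x) = x * x

main :: IO ()
main = do
  -- print (map foo [1..10])        -- rejected: [Int] is not [Maybe Int]
  print (map (foo . Just) [1..10])  -- the emulation: inject into Maybe by hand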
This is why I think the talk is underrated. There are a lot of genuinely interesting ideas here, and I get the impression Rich Hickey has thought a lot of this stuff through, but I think he’s hampered by presenting these ideas to a general Clojure audience and not a bunch of type-theory nerds.
I don’t follow: map foo [1..10] wouldn’t even typecheck; it’s not even wrong to say it’s not typesafe (edit: and apologies if that’s what you meant, I don’t mean to beat you over the head with correct terminology, I just honestly didn’t get it). And while it’s well and fine that there’s an isomorphism between {Just x | x ∈ ℤ} and ℤ, it’s not clear to me what that buys you. You still have to check your values to ensure that you don’t have Nothing (or in the case of Clojure, nil), but in Haskell, because I have algebraic data types, I can build abstractions on top to eliminate boilerplate. Your Opt example doesn’t present as better or highlight the power of this isomorphism. Why do I even care that I can’t represent this isomorphism in Haskell? I’m afraid your post hasn’t clarified anything for me.
As far as the talk, I think that if he had some really interesting ideas to share, he’d be able to explain them to type theory nerds in the same talk he gives to his “core constituency.”
At this point, I have trouble considering the output of someone who, no matter how thoughtful they may be, has made it clear that they are hostile to certain ideas without justifying that hostility. There is plenty of criticism to be made of Haskell and type theory without taking on the stance he has taken, which is fundamentally “type theory and type systems and academic PLT is not worth your time to even consider, outside of this narrow range of topics.” If he was a random crank that’d be fine, but I think that because of his position, his presentations do real harm to Clojure programmers and anyone else who hears what he’s saying without having familiarity with the topics he dismisses, because it shuts them off to a world of ideas that has real utility even if it’s very much distinct from the approach he’s taken. It also poisons the well for those of us in the community who have spent serious time in these other worlds and who value them, which is a tremendous shame, because Rich Hickey does have a lot of great ideas and ways of presenting them intuitively. So regardless of how eloquently you may be able to translate from Rich Hickey-ese, what I object to fundamentally is his intellectual attitude, moreso than his ideas, many of which I agree with.
Thank you! I should note that just because I deeply engage with a lot of different CS topics doesn’t mean I won’t flail around like an idiot occasionally, or even often.
Well understood from personal experience. ;-)
I don’t follow: map foo [1..10] wouldn’t even typecheck; it’s not even wrong to say it’s not typesafe (edit: and apologies if that’s what you meant, I don’t mean to beat you over the head with correct terminology, I just honestly didn’t get it).
That’s what I meant, it wouldn’t typecheck. Brain fart on my part.
And while it’s well and fine that there’s an isomorphism between {Just x | x ∈ ℤ} and ℤ, it’s not clear to me what that buys you. You still have to check your values to ensure that you don’t have Nothing (or in the case of Clojure, nil) … Your Opt example doesn’t present as better or highlight the power of this isomorphism.
Let’s try a different tack. So far we have
foo :: Maybe Int -> Int
bar :: Opt Int -> Int
Now I give you three black-box functions:
aleph :: Int -> Maybe Int
beis :: Int -> Opt Int
gimmel :: Int -> Int
foo . aleph typechecks, as does bar . beis. foo . gimmel doesn’t typecheck. I think all three of those we can agree on. Here’s the question: what about bar . gimmel? In Haskell that wouldn’t typecheck. However, we know that Int ⊆ Opt Int. gimmel’s codomain is a subset of bar’s domain. So bar must be defined for every possible output of gimmel, meaning that bar . gimmel cannot cause a type error.
This means that because Haskell cannot represent this isomorphism, there exist functions that mathematically compose with each other but cannot be composed in Haskell.
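A hypothetical concrete version of this (bar and beis omitted, since Opt has no Haskell representation; the point is that Haskell makes you write the injection into Maybe by hand):

aleph :: Int -> Maybe Int
aleph x = if x >= 0 then Just x else Nothing

gimmel :: Int -> Int
gimmel = (+ 1)

foo :: Maybe Int -> Int
foo Nothing = 0
foo (Just x) = x * x

ok :: Int -> Int
ok = foo . aleph                  -- typechecks: aleph's codomain is foo's domain

-- bad = foo . gimmel             -- rejected: Int is not Maybe Int

workaround :: Int -> Int
workaround = foo . Just . gimmel  -- the explicit injection that union subtyping would make implicit

main :: IO ()
main = print (workaround 3)       -- foo (Just 4) = 16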
Why do I even care that I can’t represent this isomorphism in Haskell?
Mostly this is a special case of function subtyping, which I don’t think Haskell supports at all? So if function subtyping makes your problem domain more elegant, it’d require workarounds here.
To be clear I understood your point initially about function subtyping being a thing that maybe Haskell can’t represent, and I apologize for making you come up with multiple creative examples to try to illustrate the idea (but I appreciate it)!
So if function subtyping makes your problem domain more elegant, it’d require workarounds here.
What remains unclear for me – if we’re treating this as a proxy for Rich Hickey’s argument – is how this demonstrates the practical insufficiency of Maybe, which is his main pitch. I am happy to acknowledge that there are all kinds of limitations to Haskell’s type system; this is no surprise. What I don’t yet understand is why this is a problem wrt Maybe!
In any case, thank you for the thoughtful responses.
Static typing, in my uninformed opinion, is less about catching bugs than it is about enforcing invariants that, as your application grows, become the bones that keep some sort of “tensegrity” in your program.
Without static typing your application is way more likely to collapse into a big ball of mud as it grows larger and none of the original engineers are working on it anymore. From this perspective I suppose the contract approach is largely similar in its anti-implosion effect.
However, type theory offers a whole new approach to not only programming but also mathematics and I think there is a lot of benefit we still haven’t seen from developing this perspective further (something like datafun could be an interesting protobuf-esque “overlay language” for example).
On the other hand, dynamically typed programming (I think) peaked with Lisp, and Clojure is a great example of that. A lightweight syntax that is good for small exploratory things has a lot of value and will always be useful for on-the-fly configuration. That doesn’t change that the underlying platform should probably be statically typed.
Static typing is mentioned in section 8 in regards to the spec library, and Scala is named in the Epilogue as a mature alternative.
“In this article we have listed a number of features that positively separates Clojure from the rest.” Well, it seems like the author thinks they address Scala as well as Java in the article, even though it’s only named in the previous sentence.
Spec is no alternative to static typing, as far as I know. Isn’t it just runtime checks, and possibly a test helper? Scala and Kotlin both have decent (or even advanced) type systems. I think some of the points are also advantages over Kotlin and Scala (the REPL and simplicity/stability, respectively), but the choice is not as black and white as depicted in the OP.
Spec isn’t static, but it can provide much of the same functionality as a type system like Scala’s, just that it does so at runtime rather than compile time. In addition, it can be used to implement design-by-contract and dependent types and it can also generate samples of the specified data for use in testing. It’s not the same but it is an alternative.
Yeah #8 and #9 (and arguably #7) really are just downsides that are framed as if they were upsides. “There’s no static typing, but here’s what you can do instead!” and “The startup time is really slow; here are some ways to work around that problem!”
I read #9 as ‘by default, clojure is compiled, like java. But here’s a way to get instant(-ish) startup time, which is impossible with java.’
clojure is compiled, like java
That is, clojure’s startup time characteristics are similar to java’s.
uh … no?
Clojure code can’t run without loading the Clojure runtime, which is implemented as an enormous pile of Java bytecode. A “hello world” in Java only has to load a single bytecode file, whereas a comparable Clojure program will have to load almost all of clojure.jar before it can even begin to do anything.
That sounds like someone who’s a bit too enthusiastic to acknowledge the shortcomings that any tool has.
Immediately after reading the article, I agreed with this comment. After re-reading it more critically, I think the issue isn’t that he is too enthusiastic, so much as that comparison isn’t the point of the piece. Outside of the brief jab in the second-to-last sentence (“that positively separates Clojure from the rest”), to me this doesn’t read as a comparison between all the potential successors, just something aimed at getting people interested in trying Clojure.
As someone who hasn’t written Clojure before, but really enjoys Lisp-based languages, I found it to be a helpful overview of the language. The fact that there are no negatives listed doesn’t deter me from believing the positives any more than if I was looking for a new computer and the specifications didn’t list out other companies that make a better product. It just makes me want to carry out his last sentence and see for myself:
… the best way to learn Clojure is to actually use it.
As a fan of static typing, I would not advertise Java’s implementation of it as a feature. More of an anti-feature. It doesn’t track whether a field or variable pointing at a reference type can be null. It doesn’t support algebraic data types.
For contrast, I would advertise the static type systems in Kotlin and Scala as features.
Explicit nullability tracking is not a small deal. Never having to accidentally cause a NullPointerException again is liberating.
ADTs mean you can implement everything in the “make wrong states unrepresentable” paper.
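A minimal sketch of that idea – a hypothetical connection type, not taken from the paper, and written in Haskell to match the other examples in this thread rather than Kotlin/Scala: each constructor carries only the fields that are valid in that state, so a “connected socket without a session” simply cannot be constructed.

-- Hypothetical example of "make wrong states unrepresentable":
-- no nullable fields, no boolean flags that can disagree with the data.
data Connection
  = Disconnected
  | Connecting String      -- remote host
  | Connected String Int   -- remote host, session id

describe :: Connection -> String
describe Disconnected    = "offline"
describe (Connecting h)  = "dialing " ++ h
describe (Connected h s) = "session " ++ show s ++ " with " ++ h

main :: IO ()
main = putStrLn (describe (Connected "example.com" 42))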
To avoid using k8s, they had to develop their own deployment UI and their own AWS orchestration tool.
While I agree with the overall article, it seems like they are suffering from the “Not invented here” syndrome. It might be that their tool is easier to maintain than a k8s cluster, but I’m sure they still had to invest a significant amount of manpower to develop their system.
Every time I see this article, it causes me to go look up and read the one below, and it has yet to fail to improve my day.
Not a summary, but a quote from the text itself that happens to be somewhat free of rhetoric:
You might ask, “Why would someone write code in a grotesque language that exposes raw memory addresses? Why not use a modern language with garbage collection and functional programming and free massages after lunch?” Here’s the answer: Pointers are real. They’re what the hardware understands. Somebody has to deal with them. You can’t just place a LISP book on top of an x86 chip and hope that the hardware learns about lambda calculus by osmosis.
I think it serves as a good abstract for the whole text.
Starting with “pointers are real,” one then builds abstractions on top of them to ensure they’re either replaced where more efficient (e.g. arrays vs raw pointers) or used correctly (e.g. aliasing). There are systems languages that do that in a variety of ways. Then, one might integrate them with SMT and separation-logic solvers and/or use static analyzers to eliminate these consistent errors.
The above combines the realities of hardware with modern advances in language design. Sticking with C’s limitations anyway becomes harder to justify.
I guess the point of the article is that when your “separation solvers” fail (because they will), somebody has to be able to pick up the broken pieces and rebuild civilization in an afternoon. Mankind cannot forget pointers forever, just by using abstractions.
Notice that this is the same in other sciences, unrelated to computers. Even if “high-level” properties of chemical substances are a good abstraction, cell biologists often need to dig deeper and dirty their hands with the fact that molecules are made of atoms. No science has perfect abstractions, and it would be ludicrous to pretend that programming can.
Mankind cannot forget pointers forever, just by using abstractions.
But maybe one day we can move past a virtual machine for the PDP-11?
(I happen to like C, but it’s hard to argue against the minimal amount of C necessary being the correct amount of C, in these modern times.)
But then, in which language do we design modern languages? Is there absolutely no room for bootstrapping needs in the whole thing?
Also, yes! Drivers in high-level languages in a no-frills system are a thing! https://www.netbsd.org/~lneto/bsdconbr15.pdf
Meanwhile, I enjoy C for exposing me to the raw insides of a system, and I like the (false?) impression it gives. Wow! Is it C for prototyping and a “high-level” (evolved) language for real implementation now? That moved fast!
I wanted to wait till I was home and had time to respond. I definitely have links you might enjoy.
“But then, in which language do we design modern languages? Is there absolutely no room for bootstrapping needs in the whole thing?”
If aiming for practicality, I’d say anything that handles strings/trees well, prevents/catches errors well, has metaprogramming, and can be parallelized. These would make building a robust compiler easier. Standard ML was popular for doing it correctly, Lisp for doing it powerfully. FLINT in SML and the Nanopass framework in Scheme are examples.
Maybe you want to bootstrap for simple dependencies and/or for diving down the rabbit hole. Others and I have a pile of links on that. Lisps help you cheat at it. Tcl has a simple syntax to build on, too.
“Drivers in high-level languages in a no-frills system are a thing!”
Or in Haskell. A Haskeller told me that’s not typical Haskell by any means. It did take us from “you need C for device drivers” to “you can use a different approach, like Haskell.” Lea Wittie also used a type-safe C alternative for drivers and more interesting work like Laddie here. There are also people synthesizing drivers from specifications.
“I enjoy C for exposing me to the raw insides of a system”
First, it’s definitely OK to use a language if you’re just doing it for personal enjoyment. As far as C goes, it kind of gives you both the insides of a system and decades’ worth of C-related stuff. There have been attempts at further simplifying it, like C--.
Thank you for your detailed and insightful answer, which builds on my discovery of “tools to make hardware actually run software”.
I had a vague remembrance of C--. It is good to see that from a given state, it is possible to keep going in all directions, including toward simplification.
I guess the fiercest C zealots themselves have bumped into this language’s shortcomings, for a language whose latest revisions are still backward-compatible with the earliest K&R forms.
C sounds much like today’s landscape, and landscapes evolve. There are few C compilers left that compile only C.
With more functional languages, I guess it takes a bit of programming language theory to get static type checking done right, which might be wanted in order to compile without run-time type checking – maybe useful for low-level work.
P.S.: https://bootstrapping.miraheze.org/wiki/Main_Page is definitely a gold mine, which will not help me with insomnia. :)
At least for me, the zany style is the insight: it juxtaposes the ridiculousness of low-level C bugs with this meta-narrative about the apocalypse.
See the comment from @coco
oh, there is a shrine for HIM here: https://medium.com/@soobrosa/my-humble-james-mickens-shrine-a-k-a-the-only-real-combined-cs-degree-and-mba-you-will-ever-need-1f437f496d1c
Isn’t the metaphor Git uses related to a “golden master” as opposed to the opposite of “slave”?
Here’s where the term seems to have originated: https://mail.gnome.org/archives/desktop-devel-list/2019-May/msg00066.html
That still doesn’t make it offensive terminology. Slavery has nothing to do with race outside of one particularly nasty instance of slavery in the US.
Maybe the etymology of the literal English word “slavery” is related to race, but that’s irrelevant to the point, which is that the concept the word slavery relates to has existed since time immemorial.
Blanking your comments harms discussions; please don’t.
Do NOT do this thing with editing your comments after the fact to be blank, ever again. You will be banned if there is even a single additional offense. It is anti-social behavior and has no place here.
Interesting, good to know!
Just a natural extension of finding opinions violent if those opinions, carried to their logical extreme as public policy, would lead to death.
If words can be violent by existing, then some words, no matter their origin, use, or typical purpose, must not exist.
It’s exactly the same stupid logic as the RuboCop issue and the BBC taking Fawlty Towers off iPlayer because of the use of racial slurs. It’s a complete lack of understanding of context and nuance. Words have meanings only in context. Don’t let them know that Buddhists put swastikas on their temples, they might blow a fuse.
I thought I managed to explain exactly that in such a way that anybody would be able to hear it. Was I wrong?
You were. I’ve read through your comment twice and I still don’t understand what you were trying to convey.
You make a leap from ‘opinions’ to ‘words’. I think there is a vast difference between those categories. There are certainly opinions whose utterance can be considered an act of violence (“All Frenchmen should be killed”), except in some specific circumstances (quoting, discussion, ridicule, irony, …). On the other hand, the utterance of a single word cannot be considered an act of violence, again excepting certain specific circumstances (someone giving the order ‘Kill!’, someone effecting a known response: ‘Witch!’).
I find it hard to think of circumstances in which the word ‘master’ would cause violence. Perhaps psychological violence as a ‘trigger’ word, but I would be really interested in seeing some data on the (preferably pre/early-internet, but any at all would be good) ubiquity of that.
I feel as though we’ve taken a step in the wrong direction if speaking is violence.
I think most countries’ laws recognize certain illegal speech acts, such as inciting violence. The hirer of a hitman has only engaged in speech acts. Will no one rid me of this turbulent priest? Are such speech acts then not acts of violence themselves?
I think it’s very hard for words to be violent, although others draw different lines. I think this warrants greater discussion before we accept that words can be inherently violent. This is because I am against violence, and if words are violent then, by nature, I think they would need to be minimized.
I consider words violent if they incite someone to commit violence. This can be hard to determine, so I like to use US law as a basis, as it criminalizes violent speech.
Yep, presumably ultimately derived from the process of audio mastering and a master cutting lathe, to which other lathes were slaved.