I’d lean towards Maybe on this one, although this is arguably a No. Functional languages are still quite niche, but functional programming ideas are now mainstream, at least for the HN/reddit/twitter crowd.
Also Java, so I’d call it a yes.
Model checking is omnipresent in chip design. Microsoft's driver verification tool has probably had more impact than all chip design tools combined.
I don’t know where the author is coming from here. It’s hard to imagine we would achieve the amazing hardware we have today without tools to harness complexity. What does he mean?
Sure, tools in general, but probably not for formal methods, which is what’s being discussed in that section. Sorry if that was unclear :).
I mostly agree with this, but have two nitpicks.
First, ARM is a RISC architecture, so I'd actually move that one to "yes". I'm not sure I buy the excuse that it's "not RISC-y enough". At the very least it deserves to still be a "maybe".
Second, I’d leave “formal methods” as a “no”. My understanding of formal methods is that they’re used at a higher level than type checking and static analysis: proving an algorithm will terminate, proving it won’t allocate memory, etc. If static analysis and type checking count as formal methods, I think it would have been a “maybe” back in 1999 and a “yes” now.
This is what the article says regarding ARM:
If there’s any threat to x86, it’s ARM, and it’s their business model that’s a threat, not their ISA. And as for their ISA, ARM’s biggest inroads into mobile and personal computing came with ARMv7 and earlier ISAs, which aren’t really more RISC-like than x86. In the area in which they dominated, their “modern” RISC-y ISA, ARMv8, is hopeless and will continue to be hopeless for years, and they’ll continue to dominate with their non-RISC ISAs.
For me it’s as well a surprise that “ARMv7 and earlier ISAs” are not real RISCs.
Fancy type systems
Well, if anything, Rust has been making progress into mainstream and it certainly has a fancy type system. Also, I was paid to work with languages with fancy type systems, so personally I have to categorize it as a “yes”.
If "fancy type systems" means something at least as fancy as Scala or Haskell, this is a No.
The number of job requests I get for Scala is quite high and I know many high-profile companies using it.
The Scala space also has multiple large enterprise events, and Scala ranks quite high on some indexes (yes, TIOBE is flawed, but it is still an indicator). ThoughtWorks lists it as "Adopt".
Having worked at ThoughtWorks and contributed to the tech radar, I can safely say it’s also quite flawed.
Scala has adoption; that much is undeniable. But it hasn't existed long enough to prove itself in the scope of decades.
It is flawed, but people listen to it. (For example, I still dislike that it lists Rust only in comparison to Go.)
Scala and similar systems have adoption, but the post still lists "Fancy type systems" as "no" rather than "maybe", which would better fit the "prove in the scope of decades" story.
It’s odd to me that the author lists both distributed systems and RPC as successes and then software engineering as a failure.
With unit tests, TDD/BDD, static code analysis, and various other things, I think that software engineering, for those who choose to practice it, is more of a success than ever.
The RPC/distributed systems ideas aren't exactly successes, I'd say. They're kind of anti-patterns, adopted widely in an industry driven by hype and marketing. For >80% of workloads, there is no reason it's actually more sensible to spin up some massive container fleet or scatter things across lots of data centers and clouds; it's simply that people have gotten lazy and latched onto very good marketing by startups whose business model is predicated on convincing engineers to chase the new shiny and engineer complexity.
As someone working extensively with Mesos and its ecosystem, I am constantly running into people who want to do complex SOA topologies waaaaay before they have a legitimate reason to pursue them. Nobody is familiar with the Universal Scalability Law. Nobody has heard of queueing theory. Nobody takes long-tail latency into consideration. So many services should have been libraries first, and only broken out into a process across the network once the application has an opportunity to parallelize more work by scattering some of it across other services. RPC sucks, and should be avoided unless you have measured that you can come out ahead after your latency histogram stretches out a bit.
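To make the scalability and long-tail points concrete, here's a back-of-the-envelope Python sketch (function names are mine, purely illustrative): Gunther's Universal Scalability Law for throughput, and how tail latency compounds when a request fans out over RPC.

```python
def usl_throughput(lam: float, n: int, sigma: float, kappa: float) -> float:
    """Universal Scalability Law: throughput at concurrency n.

    lam   -- ideal per-unit throughput
    sigma -- contention (serialization) penalty
    kappa -- crosstalk (coherency) penalty
    """
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))


def p_slow_request(p_fast: float, fanout: int) -> float:
    """Chance a request is slow when it fans out to `fanout` backends,
    each independently fast with probability p_fast."""
    return 1 - p_fast ** fanout


# Each backend is fast 99% of the time, yet fanning one request out to
# 100 of them means roughly 63% of requests hit at least one slow call:
# 1 - 0.99**100 ~= 0.63
```

Which is exactly why a service should earn its network hop: every extra synchronous dependency multiplies the odds of a request landing in the tail.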