1. 8

    I don’t get this hype about seL4. All I see are its claims about its security and speed, but I can’t find anything about its usability. The communication on its page aggressively attacks other operating systems (e.g. “If someone else tells you they can make such guarantees, ask them to make them in public so Gernot can call out their bullshit” in the FAQ). The performance page doesn’t have any comparisons to other OSes, yet the FAQ claims it is the fastest in the metric presented there. In general, the few times I’ve seen somebody bring up seL4, the proponents were very aggressive against other OSes. Doesn’t really look good, does it?

    1. 17

      The rhetoric from the seL4 cheerleaders can indeed be cringeworthy at times. That being said, the L4 family is an interesting look into how you can start with a really minimal set of OS features and get to something useful, and seL4 is one of a very few OS kernels to be subject to rigorous formal verification. How much you value that probably tracks very closely to how much you value formal verification in general.

      It isn’t particularly useful to compare seL4 to a general-purpose OS like Linux or Windows since they’re intended for such different use cases. seL4 might be a useful building block for, say, a hardened security appliance that handles signing keys or runtime monitoring on behalf of some other general-purpose OS, or a high-value industrial control system (power plants, medical devices, voting machines, etc.)

      The focus on performance is AFAICT aimed mainly at the historical critique of microkernels as painfully slow for real-world workloads. That in turn largely stems from the behavior of poorly-optimized Mach-backed syscalls on commodity PCs when they were being put up against monolithic designs back in the 90s. (Mac OS still seems to carry some of this debt, as Xnu is a pretty direct descendant of Mach.)

      1. 3

        Is there a blog post about this? I want to know more!

        1. 3

          It’s not just Mach; it was also Windows NT, Minix, and others. It took the L3 and L4 family of kernels a long time to get this nailed down. Just dig around Wikipedia for microkernels, and see this paper for the history of L4.

      2. 3

        It outperformed other microkernels. It would probably host an OS like Linux in a VM alongside bare-metal or runtime-supported components, with a secure middleware letting the pieces communicate. The architecture is often called Multiple Independent Levels of Security (MILS), and the microkernels implementing it are called “separation” kernels. Overall performance depends on the overheads of context switching and message passing, which leads to tradeoffs in how many pieces you break the system into.

        This pdf on a similar system (Nizza) shows how building blocks like seL4 were meant to be used. INTEGRITY-178B was the first built, certified, and deployed in Dell OptiPlexes. The certification data at the bottom-right shows what was required, but watch out for their marketing ;). LynxSecure is used by the Navy. Due to funding, complexity, and OK Labs’ focus on mobile, the seL4 team switched focus to embedded applications like military and IoT.

        @Shapr, tagging you in since the Nizza paper might help you out.

        1. 1

          I thought versions of L4 hosting Linux outperformed Linux?

          1. 2

            It did. The benchmark might be meaningless, though. A real system would extract more and more of the Linux TCB into isolated partitions. There would be more message passing. It could also cause more accidental cache flushes on top of the clearing of registers and caches that already happens in separation kernels upon a security context switch. We don’t know what the performance hit would be.

            An example would be a web server where the kernel, ethernet, networking, filesystem, firewall, TLS, and user-facing server are all in separate partitions. Things that are mostly jumps within one process’s memory become IPC across partitions. That could add up.
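
            To make that concrete, here’s a hedged sketch of what one such hop looks like against seL4’s C API (the endpoint capability fs_ep and the one-word protocol are made up for illustration). What used to be a plain function call into the filesystem becomes a rendezvous through an endpoint:

                #include <sel4/sel4.h>

                /* Illustrative only: read a block from a filesystem server
                 * living in another partition. fs_ep is an endpoint capability
                 * to that server. */
                seL4_Word fs_read_block(seL4_CPtr fs_ep, seL4_Word block_no) {
                    /* one message register, no capabilities transferred */
                    seL4_MessageInfo_t msg = seL4_MessageInfo_new(0, 0, 0, 1);
                    seL4_SetMR(0, block_no); /* marshal the argument */
                    seL4_Call(fs_ep, msg);   /* kernel entry, switch to server, block for reply */
                    return seL4_GetMR(0);    /* unmarshal the result */
                }

            Each such round trip costs on the order of hundreds of cycles even on a tuned L4 kernel, versus a handful of cycles for a direct call, which is why the partitioning granularity matters.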

      1. 5

        “With an open-source implementation, you see what you get”

        Just wanted to note this is not true at all for hardware. The synthesis tools, usually two in combination, convert the high-level form into low-level pieces that actually run. They’re kind of like Legos for logic. Like with a compiler, they might transform them a lot to optimize. They use standard cells that are usually secret. Then, there’s analog and RF functionality that might have errors or subversions with fewer experts that know anything about it. Finally, there’s the supply chain from masks to fab to packaging to you.

        With hardware, you have no idea what you actually got unless you tear it down. If it’s deep sub-micron, you have to trust one or more other companies during the tear-down process. And that’s ignoring the possibility that they can make components look like other components in a tear-down. Idk if that’s possible, but I figured I should mention it.

        When I looked at that problem, my solution was that the core, or at least a checker/monitor, had to be at 350nm or above so a random sample could be torn down for visual inspection. The core would be designed like VAMP with strong verification. Then, synthesis (eg Baranov’s) to a lower-level form with verified transforms, followed by equivalence checks (formal and/or testing). The cells, analog, and RF would be verified by mutually-suspicious experts. Then, there were some methods that can profile the analog/RF effects of onboard hardware to tell if anyone swaps it out at some point. Anyway, this is the start, with open (or vetted + NDA) cells, analog, and RF showing up over time, too. Some already are.

        1. 7
          1. 1

            I’m not a big fan of making critiques based on stuff that is explicitly outside of their security model. From my understanding, the formal verification of side channels for RISC-V would catch Spectre-style attacks: researchers implemented Spectre-like vulnerabilities in RISC-V designs that still conformed to the specification.
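
            For reference, the kind of gadget those researchers rely on is tiny and perfectly legal at the ISA level. A hedged sketch in C (all names illustrative):

                #include <stddef.h>
                #include <stdint.h>

                uint8_t array1[16];
                size_t  array1_size = 16;
                uint8_t array2[256 * 64];
                volatile uint8_t temp;

                /* Architecturally this never reads out of bounds, so it conforms
                 * to the spec. A speculating core may still execute the body with
                 * idx >= array1_size and leave a secret-dependent line in the
                 * cache for an attacker to probe afterward. */
                void victim(size_t idx) {
                    if (idx < array1_size) {          /* the bounds check */
                        uint8_t secret = array1[idx]; /* speculative OOB read */
                        temp &= array2[secret * 64];  /* encode into cache state */
                    }
                }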

            Yes, you can backdoor compilers, microcode, and hardware. But that’s not far from the generic critique of formal methods based on Gödel’s incompleteness theorem. seL4 is the only operating system that makes it worth our time to finally start hardening the supply chain against those types of attacks.

            1. 1

              I normally agree. However, they were pushing seL4 on ARM as a secure solution. You can’t secure things on the ARM offerings currently on the market. So, it’s a false claim. The honest one is that it gives isolation except for hardware attacks and/or faults. For many, that immediately precludes using it. I’d rather they advertise honestly.

              A side effect is that it might increase demand in secure hardware.

          1. 4

            Can’t GenodeOS work as the userland for seL4?

            1. 5

              Genode is nice and all, but it is Affero GPL licensed. This is likely seen as a huge liability.

              1. 5

                Specifically because they want hardware/software businesses to pay for using it. So, they should probably think of that combo as seL4 plus a commercial product. Most won’t use it as you predicted.

                1. 4

                  Ooooh, now that is something I totally missed, thanks!

                2. 2

                  From their documentation [1]:

                  Genode can be deployed on a variety of different kernels including most members of the L4 family (NOVA, seL4, Fiasco.OC, OKL4 v2.1, L4ka::Pistachio, L4/Fiasco). Furthermore, it can be used on top of the Linux kernel to attain rapid development-test cycles during development. Additionally, the framework is accompanied with a custom microkernel that has been specifically developed for Genode and thereby further reduces the complexity of the trusted computing base compared to other kernels.

                  1. 2

                    But seL4 is single core only, so it’s not much use outside of embedded or single-purpose equipment :(

                    1. 1

                      Uh, oh :/ this is something I didn’t realize :(

                      1. 1

                        They have an unverified multicore implementation, which is roughly as secure as a normal operating system.

                  1. 8

                    I expect a technical article about the fate of superior choices, and I was disappointed to find the usual bull.

                    The likes of AmigaOS and BeOS advanced the state of the art. Inferior solutions such as Windows, MacOS and later OSX were the ones most adopted.

                    Now we have seL4, but it’s the same story: technical superiority means squat. That’s the problem with software. Dumb people with decision power. Otherwise-smart people putting up with them. And we have a lot of that. It’s a wonder progress is ever made despite this fact.

                    1. 23

                      Maybe we have to look to economic and political factors to understand why Windows and Mac won. We shouldn’t retreat into our techie bubble and pretend those things don’t matter.

                      1. 4

                        Pretending they do matter is at the heart of the problem.

                        Imagine if, rather than going with the flow, we used our brains, did what was right, and put effort where it matters.

                        There’s nothing sadder than seeing otherwise capable individuals wasting their lives by pursuing the wrong endeavors, just because they are popular.

                        If anything, what is severely lacking in society is the ability to take a step back and think, as opposed to following the flow. The few people capable of it are often the ones that end up making a difference.

                        1. 14

                          There’s nothing sadder than seeing otherwise capable individuals wasting their lives by pursuing the wrong endeavors, just because they are popular.

                          I suppose you would consider me guilty of that. I work at Microsoft, on the Windows accessibility team. The work I do benefits not only Microsoft’s bottom line (somewhat), but potentially millions of people. Would it be better if I implemented a screen reader for AROS, or Haiku, or some OS based on seL4? I could design a beautiful new accessibility architecture, possibly technically superior to anything currently out there. (Then again, I’m probably not actually that brilliant.) But who would it help? That, to me, would be a waste of my time.

                          Of course, this is all ignoring the fact that I probably couldn’t get paid to work on one of those obscure OSes anyway. It would have to be a volunteer side project, and some problems are just too big to solve on nights and weekends.

                          1. 5

                            The work I do benefits not only Microsoft’s bottom line (somewhat), but potentially millions of people.

                            Microsoft customers, perhaps. Windows is unfortunately not open source as per OSI. It doesn’t count as work for humanity.

                            Those who can do paid work on actually worthy projects are few and far between.

                            I am myself an AWS drone. I do whatever I want in my free time, and I do get paid well, accelerating me towards not having to work at all (so-called FIRE), which will free me to do whatever I want, full time.

                            or some OS based on seL4?

                            By all means. You’d be proudly at the forefront of computing, advancing the state of the art.

                            1. 6

                              Windows is unfortunately not open source as per OSI. It doesn’t count as work for humanity.

                              Open source is fantastic but I wouldn’t ignore closed source software like that.

                              1. 2

                                I do not ignore it. I recognize it, but due to its closed nature (source or license), it is prevented from benefiting mankind as a whole.

                                1. 5

                                  The problem with that statement is that it isn’t, in any way, true. In fact, it’s downright hard to do any kind of creative thing without benefiting mankind as a whole.

                                  1. 1

                                    Please define what you mean by ‘benefiting mankind’.

                                    1. 1

                                      In this context, it was qualified “as a whole” and meant nothing more than not being restricted to Microsoft clients.

                                      Of course, give it enough time and if an idea has worth, it will be replicated.

                                      1. 1

                                        Of course, give it enough time and if an idea has worth, it will be replicated.

                                        Are you sure of this? I’m not.

                                        Since this statement is conditioned on “give it enough time”, as it stands, it is untestable.

                                2. 1

                                  Those who can do paid work on actually worthy projects are few and far between.

                                  By your definition of ‘worthy’ or theirs?

                                  Do you have a philosophical stance on https://en.m.wikipedia.org/wiki/Moral_relativism ?

                                  1. 1

                                    By your definition of ‘worthy’ or theirs?

                                    By theirs. Only a few fortunate people feel their job is worth doing, money aside. This impression is based on the views my network of acquaintances have on their jobs, and restricted to computer science graduates.

                                    Do you have a philosophical stance on https://en.m.wikipedia.org/wiki/Moral_relativism ?

                                    This is a dangerous topic I’ll respectfully decline to comment on.

                                    1. 3

                                      By theirs. Only a few fortunate people feel their job is worth doing, money aside. This impression is based on the views my network of acquaintances have on their jobs, and restricted to computer science graduates.

                                      Then it may surprise you to learn that I do believe my job is worth doing, money aside, even though I’m working for Microsoft on Windows. It’s true that my work on Windows accessibility is only available to Microsoft customers and their end-users (e.g. employees or students). But that’s still a lot of people that my work benefits.

                                      1. 1

                                        What can I say, but congrats for working on a job you feel worth doing.

                                      2. 2

                                        I don’t think I get how you or they are defining worth. Can you explain more deeply?

                                        Some example guesses based on people I know:

                                        • If someone meant they wouldn’t do their job if they weren’t paid for it, that would hardly be a surprise. :)

                                        • Or perhaps ‘worth’ is meant as a catch-all for job satisfaction?

                                        • If someone said their job is to make system X be more efficient, but finds this to ‘lack worth’, perhaps they would like to see more direct results?

                                        • If someone says their job is not ‘worth’ doing, perhaps they mean they hoped for better for themselves?

                                        • Perhaps someone prioritized pay or experience in the near term as a means to an end, meaning some broader notion of ‘worth’ was not factored in.

                                        • Impact aside, some jobs feel draining, demotivating, or worse.

                                        • Some jobs feel like backwaters that still exist for historical reasons but add little value to the organization or customers.

                                        1. 2

                                          If someone meant they wouldn’t do their job if they weren’t paid for it, that would hardly be a surprise. :)

                                          That one. And yes, I am not joking.

                                          I otherwise see working as a losing proposition, as no amount of pay is actually worth not doing whatever you want with your time, which is limited.

                                          1. 1

                                            I otherwise see working as a losing proposition, as no amount of pay is actually worth not doing whatever you want with your time, which is limited.

                                            I’m not sure how to parse the sentence above. With regards to “otherwise” in particular: Do you mean that work (without money) “is a losing proposition”? And/or do you mean “generally, across the board”… you should simply do what you want with your time? And/or something else?

                                            How do you respond to the following claim?… To the extent work helps you earn money to provide for your basic human needs and wants, it serves a purpose. This purpose gives it worth.

                                            I’m trying to dig in and understand if there is a deeper philosophy at work here that explains your point of view.

                                3. 12

                                  I agree that it kind of sucks, but it will always be humans using and developing software, and we cannot expect humans to be rational. We are social beings and we have feelings, and things like popularity matter, whether we like it or not.

                                  You’re right that blindly following the flow is what got us into this mess. But as technologists we need to understand the humans and politics behind these decisions so that we can create our own flows for the technically superior solutions.

                                  1. 1

                                    In context (e.g. day-to-day work, especially in systems regarding human safety), we do want to build better technical solutions because we want them to be more reliable, which means they fail less often and do less damage to people.

                                    Some of us also want better technical solutions because they make these systems more adaptable to changing contexts, which (hopefully) means less money and time spent rebuilding half-baked systems, which is, let’s face it, not the kind of work that many of us are hoping for.

                                    Now, for a broader claim: narrowly ‘technically-superior’ solutions in the service of immoral aims are not something we should be striving for.

                              2. 8

                                I really doubt it’s just “dumb people with decision power”. It’s mostly the users.

                                There is a concept called “bike shedding”, with an example: if you discuss, with a group of people, plans for building a nuclear power plant and plans for building a bike shed, people will discuss the bike shed a lot more, because that is what everyone understands. This same concept applies to most everything. Take books. The most popular books are really “dumb”. Everyone can read and understand those, and they become popular.

                                I think the same concept transfers to the software world. We have what we have, because this is what won the “so dumb, everyone can use it” race.

                                1. 2

                                  I think the same concept transfers to the software world. We have what we have, because this is what won the “so dumb, everyone can use it” race.

                                  And it’s still based on misconceptions, unfortunately.

                                  For instance, it’s pretty well accepted that concepts of modularity make programming easier, not harder. Concepts such as abstraction (as in the abstraction of the implementation behind an interface), or isolation (user processes run sandboxed with the help of mechanisms such as pagetables).

                                  However, when it comes to microkernel, multiserver operating systems, people have trouble with the idea that they are actually more tractable, rather than less. They’ll defend monoliths, even when they’re Linux-tier clusterfucks with little in terms of internal structure.

                                  At times, it seems hopeless.

                                  1. 2

                                    Not every abstraction turns out to be helpful. Sometimes they just make it impossible to figure out what’s going on.

                                    1. 1

                                      Absolutely.

                                      But it’s hard to argue no structure (chaos) is better than structure.

                                      1. 2

                                        I’ll take code that only uses simple, known-good abstractions (eg structured control, lack of global state) but is otherwise chaotic (eg code duplication with small modifications etc) over code that applies the wrong abstractions any day.

                                        1. 2

                                          For chaotic, try and trace function calls within the Linux kernel.

                                          1. 3

                                            That’s exactly the sort of thing I’m talking about - messy, but tractable with static analysis tooling. It’s a hard slog, but you can clearly see how much of a slog there is within a couple of hours’ investigation.

                                            Compare that to my daily driver - large rails apps. Not only are static analysis tools unable to follow the control graph, but the use of templated strings to find method names means you can’t even use grep to identify a given symbol.

                                            Sometimes there’s no quicker way to figure out what, if anything, uses a given method than to read 100k lines of ruby source. There’s frequently no quicker way to figure out where a method call goes than running it in a debugger.

                                            1. 1

                                              As the kernel runs in supervisor mode, I’d really prefer if it was very clearly structured and the execution flows going through it were obvious and didn’t require running it on a debugger.

                                2. 5

                                  I met the developers of seL4. It’s a tool intended for a very specific set of use cases, mostly embedded systems and military tech. It’s not intended to be a replacement for Windows/Mac/Linux and is not at all the “same story”.

                                  1. 2

                                    That’s not what they originally advertised, though. Originally, it was one of many L4-centric projects that would be used as a foundation for desktop, mobile, and embedded applications. Nizza, Perseus, Genode, OKL4A, INTEGRITY-178B, LynxSecure, VxWorks MILS, etc. are all examples which did desktops by putting a Linux VM on top of the kernel. The seL4 kernel had x86 support for that, but initial efforts focused on ARM.

                                    I guess they realized the difficulty, both technical and marketing-wise, of doing a secure workstation for x86. Most verification funding was also going toward embedded, IoT, and military. A military company bought OK Labs. It looks like they have pivoted, for now, to totally focus on those areas, building out their component architecture. They even changed their website to only talk about these things. The NICTA website talked about the things in this comment.

                                    It’s probably a good move given the software requirements are simpler. They’ll be more likely to succeed.

                                    1. 2

                                      Gernot has said that the verification of the multicore kernel is very costly and no individual client is willing to foot the bill. They lost governmental funding and (AFAICT) their primary funding sources are from defense. So yeah, they would like to expand beyond embedded controllers for the military (a high-assurance VMM for Amazon or something), but no one cares about security enough.

                                      1. 1

                                        I didn’t know that. I wonder if it means funding authorities don’t care about security or don’t care to fund that project. The seL4 kernel is a simplified kernel verified using ultra-slow, ultra-costly techniques.

                                        They might want to fund methods with higher productivity and/or applicability to existing systems. Most of the market still won’t buy whatever it is, though. A combo of developer, market, and defense apathy is why I’m doing far less security research than before.

                                    2. 2

                                      This is what it is, currently. Doing whatever is necessary to get embedded applications running is absolutely a much simpler scenario than a workstation one, and currently very realistic; they do have the examples to point to.

                                      But there’s nothing stopping it from going further. Genode’s Sculpt manages to demonstrate this really well.

                                    3. 2

                                      Emotionally, I think I know where you are coming from.

                                      However, there isn’t a strong argument here. Some problems I see:

                                      • There is not just one problem with software.
                                      • You don’t explain what you mean by ‘dumb’ — it comes across as an amorphous insult
                                      • There are many kinds of intelligence
                                      1. 2

                                        Dumb was an unfortunate choice of words. Technically illiterate would have perhaps worked, but there’s a component of closed-mindedness or unwillingness to consider alternatives.

                                        The background of the post is having seen people who are otherwise intelligent and capable put up with terrible decisions from above and derive unhappiness from it. It is often better to stand up for your beliefs (as in, if you’re actually sure) and oppose these decisions. Doing so at least allows for an “I told you so.” Should the company’s climate not allow even that, I’d suggest finding another job.

                                      2. 2

                                        The likes of AmigaOS and BeOS advanced the state of the art. Inferior solutions such as Windows, MacOS and later OSX were the ones most adopted.

                                        Technical superiority has nothing to do with the success of a platform. User experience is the ultimate arbiter in this case. MacOS has better UX than most operating systems. Windows has better UX than Linux or seL4 for a p50 user (example: my mother). People are not dumb for choosing Windows or MacOS over Linux / seL4; they simply go for the better UX. If you want to create a superior platform, it has to start with superior UX; everything else is secondary.

                                        1. 3

                                          Windows was always in the bottom league when it came to UX; it became a winner because it had guaranteed backwards compatibility with an even worse system: MS-DOS.

                                          1. 2

                                            And monopoly tactics. Similar story for IBM vs better-designed mainframes such as B5000.

                                          2. 1

                                            Technical superiority has nothing to do with the success of a platform.

                                            Do you mean ‘is less important’ rather than ‘has nothing to do with’?

                                            If you really mean ‘has nothing to do with’ you have the burden of proving a negative.

                                            A negative claim is a colloquialism for an affirmative claim that asserts the non-existence or exclusion of something.[10] The difference with a positive claim is that it takes only a single example to demonstrate such a positive assertion (“there is a chair in this room,” requires pointing to a single chair), while the inability to give examples demonstrates that the speaker has not yet found or noticed examples rather than demonstrates that no examples exist - Wikipedia: https://en.m.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative

                                            1. 2

                                              Ok, I’ll bite.

                                              • VMS technically superior to Unix -> Unix won
                                              • OS/2 technically superior to DOS -> DOS won

                                              It appears to me that technical superiority does not have anything to do with how successful a platform is. Feel free to prove me wrong.

                                              1. 3

                                                I’d argue compatibility is the primary driver of success.

                                                I can run windows 95 games on windows 10; I can open an excel document from 1995 today. Getting PHP or java code from 20 years ago to run is typically no big deal, and that’s a large part of their dominance in their respective niches.

                                                1. 1

                                                  The way you are making the claim is oversimplified.

                                                  You’ve also shifted your language from ‘success of a platform’ to a notion of ‘winning’. But it raises the question ‘over what timeframe’? These platforms are not static, either.

                                                  For example, a big reason that Windows has remained a force (relative to competitors) is that it has improved its underpinnings over time.

                                                  Proving a negative is often a waste of time unless you are working with precise definitions and deductive reasoning.

                                                  Let me suggest your time would be better spent by clarifying what you mean rather than making absolute statements.

                                                  Or maybe you want to write a thesis showing every software platform and demonstrating that in every case, technical aspects played no role in their evolution and success across various time scales? If so, go for it. :P (Be careful not to cherry pick the time scale to suit your argument. Or leave out examples that don’t fit.)

                                                  I’m trying to explain why oversimplified forms of argument are not very useful to me. My goal is to understand how these factors relate not only in the past but also in the future.

                                                  Your version of your argument in your head may be useful to you in some sense, but the way you’re stating it is way too blunt. I think by adding some nuance, your mental model of the situation will improve. I intend this to be taken in the spirit of constructive criticism.

                                          1. 1

                                            It’s a 2018 Arm A76, fabbed at 7nm and targeting 2GHz. Sounds like we will get concrete answers on how CHERI’s memory model impacts performance.

                                            1. 1

                                              Basically guards for the DOM. Also sounds a lot like the SafeStrings proposal I posted earlier.

                                              When are we going to get a language with gradual dependent typing, runtime checks, and OCaps? :P

                                              1. 2

                                                The paper claims that there is an implementation somewhere but doesn’t link to it. I’ve requested access.

                                                1. 1

                                                  Yes! I wanted more work in this area. That and solvers are the only way to make it practical. I figure it’s going to be a mix of experts laying down supporting theories (eg on arrays, floats) with automation connecting them. After it starts working, we can ask that folks not get funded unless they use the solvers and test generators to speed up the work.

                                                  1. 2

                                                    Indeed! I wish the authors had included analysis on how much this tool could boost overall productivity. It’s hard to judge how useful this would be as a guided proof assistant.

                                                    72 of the 79 generated proofs were <= 10 proof steps, which is impressive given that each proof consists of ~100 commands! However, the commands within the 10-step-proofs consisted of only ~20% of the ~12,000 total proof commands. Why not break the longer (and ostensibly more sophisticated) proofs into shorter ones (as CoqGym did)?

                                                    That being said, CoqGym and PB9K1 reaching 50-200% of CoqHammer’s solve rate is impressive: CoqHammer’s backends have had years of commercial development (preceded by decades of research). The authors are probably busy improving their algorithms and planning more sophisticated analysis for later. Hopefully we’ll see more qualitative discussion in the future.

                                                    1. 1

                                                      Well, your comment tells me there are at least straightforward avenues for improvement: breaking things up. They should try to merge those ideas first just to get better results. That gets more funding and buys time for them to come up with the real leap a year or two later. ;) Also, I’d rather they apply it to more proof efforts to generalize it. The nature of compilers might make them easier to do this on than, say, OS kernels or data structures mutating graphs.

                                                      One problem that may kick in is that I hear tools like Coq change in a way where old proofs don’t work on newer versions. If true, that might make building a training set of exemplar proofs more difficult. If there’s no standardization, we might need converters that transform older proofs into the new representation. I don’t know whether that would be just a transpiler or would require understanding the logic to the point that it’s improbable/impossible.

                                                  1. 7

                                                    Big fan of work like this. Good article. I’ll note a good and bad point.

                                                     The good point is forcing user-space processes to donate their own resources to kernel calls. I’ve been promoting that wherever possible ever since I saw INTEGRITY-178B do it. Although I’m unsure if they did, one might also want to apply that to partitioned subsystems for networking, filesystems, and graphics. Then, malicious code usually only DoSes or leaks within its own partition, thanks to fewer shared resources.
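
                                                     A hedged sketch of what that donation looks like against seL4’s MCS API as I understand it (capability names and budget numbers are illustrative, and the exact invocation signatures may differ by kernel version):

                                                         #include <sel4/sel4.h>

                                                         /* Give a client a bounded CPU budget: 1ms out of every
                                                          * 10ms (times in microseconds; values illustrative). */
                                                         void setup_bounded_client(seL4_CPtr sched_ctrl,
                                                                                   seL4_CPtr client_sc,
                                                                                   seL4_CPtr client_tcb) {
                                                             seL4_SchedControl_Configure(sched_ctrl, client_sc,
                                                                                         1000, 10000, 0, 0);
                                                             seL4_SchedContext_Bind(client_sc, client_tcb);
                                                             /* The server is left "passive" (no scheduling context
                                                              * bound), so it only runs on time donated by callers
                                                              * during seL4_Call(). A runaway client can then burn
                                                              * only its own budget, not the server's. */
                                                         }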

                                                     The bad point is the security-vs-performance claim. We treat security and performance as inversely related by default for a reason. In many situations, you get better performance through resource sharing, increased coupling, and/or removing checks. Each of these can create security vulnerabilities. Additionally, layered systems operating on potentially-malicious inputs might have several levels of somewhat-redundant checks, covering for how data that is safe early on might become malicious in the middle. Finally, monitoring costs something, too.

                                                    1. 7

                                                       The bad point is the security-vs-performance claim. We treat security and performance as inversely related by default for a reason. In many situations, you get better performance through resource sharing, increased coupling, and/or removing checks. Each of these can create security vulnerabilities. Additionally, layered systems operating on potentially-malicious inputs might have several levels of somewhat-redundant checks, covering for how data that is safe early on might become malicious in the middle. Finally, monitoring costs something, too.

                                                       Their point is that you won’t see adoption without competitive performance. The first generation of microkernels dropped user-space drivers because their IPC was too slow [1], so the L4 family [2] pays special attention to IPC overhead. seL4 still brings some critical operations into kernel space (timer drivers and scheduling) despite the verification overhead.

                                                      FWIW, the same holds true regarding usability and security: if you don’t nail usability, then users will circumvent the security protocols.

                                                    1. 5

                                                      Our recent work on [side channel] protection indicates that we can solve this problem in seL4, by temporally or spatially partitioning all shared hardware resources, we partition even the kernel. This assumes that the hardware manufacturers get their act together and provide mechanisms for resetting all time-shared resources (and I’m working on the RISC-V Foundation to ensure that RISC-V does).

                                                      This is why we need Free and Open Source systems, so that we can solve problems collectively. Closed shops get away with it because they hide their terrible code in opaque binary blobs. Those sins are usually the result of short sighted management and FOSS has a way of forcing companies to do better. From Dave Airlie’s rejection of the initial AMDGPU driver (which had an HAL):

                                                      There have been countless requests from various companies and contributors to merge unsavoury things over the years and we’ve denied them. They’ve all had the same reasons behind why they couldn’t do what we want and why we were wrong, but lots of people have shown up who do get what we are at and have joined the community and contributed drivers that conform to the standards.

                                                      Here’s the thing, we want AMD to join the graphics community not hang out inside the company in silos. We need to enable FreeSync on Linux, go ask the community how would be best to do it, don’t shove it inside the driver hidden in a special ioctl. Got some new HDMI features that are secret, talk to other ppl in the same position and work out a plan for moving forward.

                                                      1. 9

                                                         This unfortunately doesn’t cover current Thunderbolt 3 (and maybe soon-to-be USB 4.0) cables. They use the same USB-C connector but have their own range of capabilities, and they add their own confusion to the mix by not clearly identifying which cables support what.

                                                         For Thunderbolt on MacOS, you get a “Cannot Use Thunderbolt Accessory” notification when you plug the device in if it’s not working properly, but there’s no additional information on why it’s not working, or any indication of whether it’s due to a cabling issue or other hardware failure.

                                                        1. 5

                                                          The whole situation is a total catastrophe.

                                                          1. 2

                                                             The way I see it, there are two dimensions: connectors and capabilities. If we want to support each capability with each connector (on both ends of the wire), we’ll need to support the whole 2-D space, obviously. Perhaps the naming could be improved, but I really don’t see any problem with the number of combinations out there. Unless people are okay with losing compatibility (physical/software), which everyone is, until things break for them.

                                                          2. 1

                                                            Maybe USB 4 will help clarify the issue: force all USB-4 cables to be Type C and limit the varieties to those with or without power delivery. Then you just have to make sure it’s a USB-4 cable, no more googling for "USB 3.1" "gen 2" "5 Amp"|"5A".

                                                          1. 5

                                                            What I appreciate most of all is that nobody apparently thought about how to design USB-C plugs so they didn’t slide out.

                                                            1. 3

                                                              Sweet baby Jesus why would you want that? Personally I’m annoyed at how difficult it is to pull out a USB-C compared to the (now) old-fashioned MagSafe. When it’s finally time to replace the wife’s old laptop with whatever’s current at the time, I fear for its life.

                                                              1. 2

                                                                I loved MagSafe connectors and thought them up years before Apple introduced them: magnets get rid of mechanical wear-and-tear while making it easier to plug in! I’m guessing they weren’t used in USB-3 because of the connector size and magnetic interference.

                                                              2. 2

                                                                It seems to me that they did. My regular phone charger slides out way too easily, but my laptop charger (when either plugged into my laptop, or my phone) is quite good at staying in until I try and pull it out.

                                                                1. 3

                                                                   Have you checked your phone’s usb-c port and maybe tried cleaning it with a toothpick? :)

                                                                   Not sure this is intentional, but with my Nexus 5X the lint seems to settle in such a way that the usb-c cable slides out at the slightest touch once enough has accumulated. The connection is never broken in a way you’d notice; it’s just the mechanical “lock” that goes.

                                                                2. 2

                                                                  I’ve personally always experienced that issue more often with micro-A than I have with any of my devices with C.

                                                                  1. 1

                                                                    I’ve been so annoyed by this that I’m pondering whether USB-C cables can be used for electronics which don’t get a gentle treatment all of the time (badges, smaller electronic cards, …).

                                                                  1. 6

                                                                     This describes the different compliant varieties, but to make things yet more complicated, it sounded like for some time there were a lot of manufacturers producing incorrectly-terminated cables. Benson Leung was naming-and-shaming them for a while, but I don’t know if that kind of scrutiny is necessary anymore. http://bensonapproved.com redirects and I can’t seem to access that site anymore.

                                                                    1. 6

                                                                      For the record Benson Leung is the author of this very post.

                                                                      1. 2

                                                                         I bought a cord he approved, and it was a PoS. I think they got a bump from his endorsement and then cut quality to reap the profits. He’s only one person and he can’t continually test cables at his own expense; the USB licensors really need to implement some sort of QA process.

                                                                      1. 8

                                                                        I’ve said it before and I’ll say it again: ZFS should be the default on all Linux distros. It’s in a league of its own, and makes all other existing Linux filesystems irrelevant, bizarre licensing issues be damned.

                                                                        1. 7

                                                                           I use ZFS and love it. But I disagree that ZFS should be the default as-is. It requires a fair bit of tuning: for non-server workloads, the ARC in particular. ZFS does not use Linux’s buffer cache, and while the ARC size adapts, I have often seen on lower-memory machines that the ARC takes too much memory at a given point, leaving too little for the OS and applications. So most users would want to tune zfs_arc_max for their particular workload, as sketched below.
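
                                                                           For illustration, the tuning I mean is a one-liner; the 4 GiB cap below is just an example value, not a recommendation:

                                                                               # /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (bytes)
                                                                               options zfs zfs_arc_max=4294967296

                                                                               # or apply at runtime (as root) without a reboot:
                                                                               echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max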

                                                                          I do think ZFS should be available as an option in all Linux distributions. It is simply better than the filesystems that are currently provided in the kernel. (Maybe bcachefs will be a competent alternative in the future.)

                                                                          1. 2

                                                                            I agree.

                                                                            I remember installing FreeBSD 11 once (with root on ZFS) because I needed a machine remotely accessible via SSH to handle files on an existing disk with ZFS.

                                                                             No shizzle: FreeBSD defaults, the machine had 16G of RAM, and during an hours-long scp run, the ARC decided to eat up all the memory, triggering the kernel into killing processes… including SSH.

                                                                            So I lost access, had to restart scp again (no resume, remember), etc. This is a huge show stopper and it should never happen.

                                                                            1. 1

                                                                              That seems like a bug that should be fixed. Don’t see any reason why that should prevent it from being the default though.

                                                                            2. 1

                                                                               That’s definitely something to consider. However, Apple has made APFS (ZFS-inspired) the default on macOS, so there’s got to be a way to make it work for ZFS + Linux desktops too. ZFS is all about making things work without you having to give them much thought. Desktop distros can pick reasonable defaults for desktop use, and ZFS could possibly make the parameter smarter somehow.

                                                                            3. 2

                                                                              I think the licensing issue is the primary problem for Linux distros.

                                                                              1. 1

                                                                                 I agree on technical superiority. What about the Oracle threat, given its owner pulled off that API trick? Should we all take the risk of owing Oracle’s lawyers money in some future case? Or rush to implement something different that they don’t control, with most of its strengths? I think the latter makes the most sense in the long-term.

                                                                                1. 3

                                                                                  Oracle is not a problem, as the ZFS license is not being violated – it is the Linux license.

                                                                                  1. 1

                                                                                    “Oracle is not a problem, as the ZFS license is not being violated”

                                                                                    That’s a big claim to make in the event large sums of money are ever involved. Oracle threw massive amounts of lawyers at Google, with the result being that APIs were suddenly a thing they could copyright. Nobody knew that before. With enough money and malicious intent, it became a thing that could affect FOSS developers or anyone building on proprietary platforms. What will they do next?

                                                                                    I don’t know. Given they’re malicious, the safest thing is to not use anything they own or might have patents on. Just stay as far away as possible from every sue-happy party in the patent and copyright spaces. Oracle is a big one that seeks big damages from its targets on top of trying to rewrite the law in its cases. I steer clear of their stuff. We don’t even need it, either; it’s just more convenient than the alternatives.

                                                                                    1. 8

                                                                                      The CDDL, an OSI-approved open source license, includes both a copyright and patent grant for all of the code released by Sun (now Oracle). Oracle have sued a lot of people for a lot of things, but they haven’t come after illumos or OpenZFS, and there are definitely companies using both of those bodies of software to make real money.

                                                                                      1. 2

                                                                                        I think you’re missing the implications of the fact that they effectively rewrote the law in the case I referenced. If they can do that, it might not matter what their agreements say if it’s their property. The risk might be low enough that it never plays out. One just can’t ever know, if one depends on legal provisions with a malicious party that tries to rewrite laws in its favor with lobbyists and lawyers.

                                                                                        And sometimes succeeds, unlike basically everyone doing open source and free software. Those seem to barely enforce their agreements and/or be vulnerable to patent suits in the case of the permissive licenses. Plus, could the defenders even afford a trial at current rates?

                                                                                        I bet 10 years ago you wouldn’t have guessed a mobile supplier using an open-ish platform would be fighting to avoid handing over $8 billion to an enterprise-focused database company. Yet untrustworthy dependencies let that happen. And we got lucky that it was a rich company that depended on OSS/FOSS stuff doing the defending. The rulings could’ve been worse for us if it wasn’t Google.

                                                                                        1. 6

                                                                                          Seeing as Sun gave ZFS away before Oracle bought it, Oracle would have a LOT of legal wackiness to get the CDDL license revoked somehow. But for the sake of argument, let’s assume they do somehow manage to get it invalidated, and go nuts and decide to try to charge everyone currently using ZFS bajillions of dollars for “their” tech. Laws would have to change significantly for that to happen, and with such a significant change in current law, there is basically zero chance it would be retroactive from the moment you started using ZFS, so worst case you’d have to pay from the time of the law change. That is, if you didn’t just move off of ZFS after the law changed and be out zero dollars.

                                                                                          Also, the OSS version of ZFS is so significantly different from Oracle’s version that they are kissing cousins at best anymore. ZFS has been CDDL-licensed since 2005, so there’s a long history of divergence from the Oracle version. I think Oracle would have a VERY hard time getting the OSS version back under the Oracle banner(s), even with very hypothetical, significant law changes.

                                                                                          I’m in favour of things competing against ZFS, but currently nothing really does. BTRFS tries, but its stability record is pretty miserable for anything besides the simplest workloads. ZFS has had wide production usage since 2005. Maybe in another 5 or 10 years we will have a decent, stable competitor to some of ZFS’s feature-sets.

                                                                                          But regardless, if you are a large company with something to lose, your lawyers will be the ones advising you about using ZFS or not, and Canonical’s lawyers clearly decided there was nothing to worry about, along with Samsung (who own Joyent, the people behind illumos). There are also many other large companies that have bet big on Oracle having basically zero legal leg to stand on.

                                                                                          Of course the other side of the coin is the ZFS <-> Linux marriage, but that’s easy: just don’t run ZFS under Linux, or use the Canonical-shipped version and let Canonical take all the legal heat.

                                                                                          1. 2

                                                                                            Best counterpoints so far. I’ll note this part might not be as strong as you think:

                                                                                            “and Canonical’s lawyers clearly decided there was nothing to worry about, along with Samsung (who own Joyent, the people behind illumos)”

                                                                                            The main way companies dodge suits is to have tons of money and patents themselves, to make the process expensive as hell for anyone that tries. Linux companies almost got patent-sued by Microsoft. IBM, a huge patent holder, stepped up, saying they’d deal with anyone that threatened Linux. They claimed they were putting a billion dollars into Linux. Microsoft backed off. That GPL companies aren’t getting sued made Canonical’s lawyers comfortable, but it’s not an actual assurance. Samsung is another giant patent holder with big lawyers. It takes an Apple-sized company to want to sue them.

                                                                                            So, big patent holders, and the projects they protect, are outliers. That might work to ZFS’s advantage here, especially if IBM used it. They don’t prove what will happen with smaller companies, though.

                                                                                            1. 2

                                                                                              I agree with you in theory, but not in practice, because of the CDDL (which ZFS is licensed under). This license explicitly grants a “patent peace”; see: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License

                                                                                              I know most/many OSS licenses sort of wimp out on patents and ignore the problem, CDDL doesn’t. Perhaps it could have even stronger language, and there might be some wiggle room for some crazy lawyering, but I just don’t really see Oracle being THAT crazy. Oracle, being solely focused on $$$$, would have to see some serious money bags to go shake loose. I doubt they would ever bother going after anyone not the size of a Fortune 500; the money just isn’t there. Google has giant bags full of money they don’t even know what to do with, so Oracle trying to steal a few makes sense. :P

                                                                                              Oracle going after Google makes sense knowing Oracle, and it was, like you said, brand-new lawyering, trying to create copyrights out of APIs. Patents are not remotely new, so some lawyer for Oracle would have to dream up some new way to twist the law to their advantage. Possible, sure, but it would be possible for any other crazy lawyer to go nuts here (wholly unrelated to ZFS or even technology); it’s not Oracle-exclusive idiocy. Trying to avoid unknown lawyering that isn’t even theoretical at this point would be sort of stupid, I would think… but I’m not a lawyer.

                                                                                              1. 1

                                                                                                “I know most/many OSS licenses sort of wimp out on patents and ignore the problem; the CDDL doesn’t.”

                                                                                                That would be reassuring on the patent front.

                                                                                                “Possible, sure, but it would be possible for any other crazy lawyer to go nuts here (wholly unrelated to ZFS or even technology); it’s not Oracle-exclusive idiocy. Trying to avoid unknown lawyering”

                                                                                                Oracle was the only one to flip software copyright on its head like this, so I don’t think it’s an any-company thing. Either way, the threat I’m defending against isn’t unknown lawyering in general: it’s unknown lawyering by a malicious company whose private property I may or may not depend on. When you frame it that way, one might wonder why anyone would depend on a malicious company at all; avoiding that is a good pattern in general. Then the license negates some of that potential malice, leaving a great product with unknown residual risk.

                                                                                                I agree the residual risk probably won’t affect individuals, though. An Oracle-driven risk might affect small to mid-sized businesses, depending on how it plays out. The good news is that swapping filesystems isn’t very hard on Linux and the BSDs. ;)

                                                                                      2. 4

                                                                                        AFAIK, it’s the GPL that’s being violated. But I’m really tired, and the SFC does mention something about Oracle suing, so 🤷.

                                                                                        Suing based on the use of works derived from Oracle’s CDDL sources would be a step further than the dumb Google Java lawsuit because they haven’t gone after anyone for using OpenJDK-based derivatives of Java. Oracle’s lawsuit-happy nature would, however, mean that a reimplementation of ZFS would be a bigger target because it doesn’t have the CDDL patent grant. Of course, any file system that implements one of their dumb patents could be at risk….

                                                                                        I miss Sun!

                                                                                  2. 1

                                                                                    What does ZFS have that is so much better than btrfs?

                                                                                    I’m also not sure these types of filesystems are well suited for databases, which implement their own transactions and COW, so I’m not sure I would go as far as saying they’re all irrelevant.

                                                                                    1. 11

                                                                                      ZFS is extremely stable and battle-tested. While that’s not in itself a reason it’s a better filesystem, it does make it an extremely safe option when what you’re looking for is something stable that keeps your data consistent.

                                                                                      It is also one of the most cross-platform filesystems: Linux, FreeBSD, macOS, Windows, Illumos. It has a huge amount of development behind it, and recently the community has come together significantly across the platforms. Being able to export your pool on FreeBSD and import it on Linux or another platform makes it a much better option if you want to avoid lock-in.

                                                                                      Additionally, the ARC (ZFS’s adaptive replacement cache) gives it a very effective caching layer.

                                                                                      And there are well-known problems with btrfs that make it not ready.

                                                                                      1. 0

                                                                                        If I don’t use/want to use RAID5, then I don’t see the problem with btrfs.

                                                                                        1. 3

                                                                                          I ran btrfs in production on my home server for ~3-4 years, IIRC. If you want to use btrfs as a better ext4, e.g. just for the compression and checksumming and maybe, maybe snapshotting, then you’re probably fine. If you want to do anything beyond that, I would not trust it with your data. Or at the very least, I wouldn’t trust it with any data that isn’t backed up using something that has nothing to do with btrfs (i.e. not btrfs snapshots and not btrfs send/receive).

                                                                                          I had three distinct crashes/data-corruption problems that damaged the filesystem badly enough that I had to back up and run mkfs.btrfs again. These were mostly caused by interruptions/power failures while I was making changes to the fs, for example removing a device or rebalancing. Honestly, I’ve forgotten the exact details now, otherwise I’d say something less vague, but the bottom line is that it simply lacks polish. And mind you, this is the filesystem that is supposed to be explicitly designed to resist this kind of corruption. I know at least the last case of corruption I had (which finally made me move to ZFS) was obviously preventable, but that failure handling hadn’t been written yet, so the fs got into a state the kernel didn’t know how to handle.

                                                                                      2. 3

                                                                                        Well, I don’t know about better, but ZFS has the distinct disadvantage of being an out-of-tree filesystem, so it can and will break entirely at the whims of kernel development. How anyone can call this stable and safe for production use is beyond me.

                                                                                        1. 3

                                                                                          I think the biggest argument is mature implementations used by large numbers of people; that catches lots of common and uncommon problems. In reliability-focused filesystems, reliability that is field-proven and then constantly maintained matters more to me than almost anything else. The only reason I don’t use it is that it came from Oracle, with all the legal unknowns that can crop up down the line.

                                                                                          1. 3

                                                                                            When you say “Oracle”, are you referring to ZFS or btrfs? ;)

                                                                                            1. 1

                                                                                              Oh shit! I didn’t know they designed both! Glad I wasn’t using btrfs either. Thanks for the tip haha.

                                                                                          2. 2

                                                                                            On a practical level, ZFS is a lot more tested (in Solaris/Illumos, FreeBSD, and now Linux); more different people have put more terabytes of data in and out of ZFS than they seem to have for btrfs. This matters because we seem to be unable to build filesystems that don’t run into corner cases sooner or later, so the more time and data a filesystem has handled, the more corner cases have been turned up and fixed.

                                                                                            On a theoretical level, my personal view is that ZFS picked a better internal structure for how its storage is organized and managed than btrfs did (unless btrfs drastically changed things since I last looked several years ago). To put it simply, ZFS is a volume manager first and then a filesystem manager second (on top of the volumes), while btrfs is (or was) the other way around (you manage filesystems and volumes are a magical side effect). ZFS’s model does more (obvious) violence to Linux IO layering than I think btrfs’s does, but I strongly believe it is the better one and gives you cleaner end results.
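
                                                                                              To make that layering inversion concrete, here is a toy sketch of my own (purely illustrative Python, not either project’s actual design or terminology): in the ZFS model the pool is the primary object and filesystems hang off it, while in the btrfs model the filesystem is primary and device management hides behind it.

                                                                                              ```python
                                                                                              # Toy model of the two layering approaches; names and structure
                                                                                              # are illustrative only, not real ZFS or btrfs internals.

                                                                                              class ZfsPool:
                                                                                                  """ZFS-style: volume management (the pool of vdevs) comes first."""
                                                                                                  def __init__(self, vdevs):
                                                                                                      self.vdevs = vdevs        # disks, mirrors, raidz groups
                                                                                                      self.datasets = {}        # filesystems layered on top of the pool

                                                                                                  def create_dataset(self, name):
                                                                                                      # Datasets are cheap and share the pool's space on demand.
                                                                                                      self.datasets[name] = {"used": 0}


                                                                                              class BtrfsFilesystem:
                                                                                                  """btrfs-style: the filesystem comes first; devices hide behind it."""
                                                                                                  def __init__(self, devices):
                                                                                                      self.devices = devices       # multi-device handling folded into the fs
                                                                                                      self.subvolumes = {"@": {}}  # "volumes" appear as a side effect

                                                                                                  def create_subvolume(self, name):
                                                                                                      self.subvolumes[name] = {}


                                                                                              pool = ZfsPool(vdevs=["mirror-0"])
                                                                                              pool.create_dataset("tank/home")  # one pool, many filesystems on top
                                                                                              fs = BtrfsFilesystem(devices=["/dev/sda", "/dev/sdb"])
                                                                                              fs.create_subvolume("@snapshots")
                                                                                              ```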

                                                                                          3. 0

                                                                                            Why would I want to run ZFS on my laptop?

                                                                                            1. 1

                                                                                              Why wouldn’t you want to run it on your laptop?

                                                                                          1. 5

                                                                                            A more accurate sub-heading would be that “no method” of verification caught Selfie, including formal verification. Cryptographers’ review, secure coding, tests, formal verification… nothing that was applied caught it. That way it doesn’t give the impression that formal verification uniquely failed; it just elaborates on one of many failures.

                                                                                            “At first glance, Selfie does indeed seem to fly in the face of the unprecedented effort to formally verify TLS 1.3 as secure.”

                                                                                            I prefer to be consistent. Teams like seL4’s say the verification shows that the implementation meets the spec, not that it’s absolutely secure. If we’re echoing this, we might say that some or many people thought TLS was secure due to formal verification. However, it was only verified against a spec that addressed the attacks we understood. New attacks require new specs and tools, with changes to the implementation. It’s that simple.
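
                                                                                            To make that spec-vs-security gap concrete, here is a toy sketch of my own (nothing to do with the actual TLS proofs): both functions below satisfy a functional spec like “return True iff the inputs are equal”, so a proof against that spec blesses both, yet one of them leaks a timing side channel the spec never mentions.

                                                                                            ```python
                                                                                            import hmac

                                                                                            def naive_compare(a: bytes, b: bytes) -> bool:
                                                                                                """Meets the spec 'True iff a == b', but exits on the first
                                                                                                mismatch, so its runtime leaks how many leading bytes match."""
                                                                                                if len(a) != len(b):
                                                                                                    return False
                                                                                                for x, y in zip(a, b):
                                                                                                    if x != y:
                                                                                                        return False  # early exit: the timing side channel
                                                                                                return True

                                                                                            def constant_time_compare(a: bytes, b: bytes) -> bool:
                                                                                                """Same functional spec, but compares in constant time."""
                                                                                                return hmac.compare_digest(a, b)

                                                                                            # A verifier checking only functional correctness accepts both;
                                                                                            # the timing attack lives outside the spec, so the proof is silent on it.
                                                                                            assert naive_compare(b"secret", b"secret")
                                                                                            assert constant_time_compare(b"secret", b"secret")
                                                                                            ```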

                                                                                            1. 3

                                                                                              I prefer to be consistent. Teams like seL4’s say the verification shows that the implementation meets the spec, not that it’s absolutely secure. If we’re echoing this, we might say that some or many people thought TLS was secure due to formal verification. However, it was only verified against a spec that addressed the attacks we understood. New attacks require new specs and tools, with changes to the implementation. It’s that simple.

                                                                                              The author devoted the last third of the article to explaining this.

                                                                                              1. 2

                                                                                                One of the top priorities on my to-do list is a blog post reframing formal verification within the high-assurance toolbelt. Right underneath that is reframing capabilities as automatically generated, fine-grained sandboxing.

                                                                                                1. 1

                                                                                                  That fits a lot of uses for capabilities. Developers can learn advanced stuff later on. Is there a brief summary of the reframing for formal verification? And do submit it here when done. :)

                                                                                              1. 1

                                                                                                Yikes, they want you to embed arbitrary JS code. I have designed a commenting system with ironclad security (it’s impossible to perform SQL injection, very difficult to compromise the parser, and JS exploits require a browser zero-day) and graceful degradation for no-JS users.

                                                                                                Guess I should get around to that.
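
                                                                                                For what it’s worth, the “impossible to perform SQL injection” part is the easy bit if every piece of user input is bound as a parameter instead of spliced into the query string. A minimal sketch (using Python’s sqlite3; the comments table is made up for illustration):

                                                                                                ```python
                                                                                                import sqlite3

                                                                                                conn = sqlite3.connect(":memory:")
                                                                                                conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

                                                                                                def add_comment(author: str, body: str) -> None:
                                                                                                    # Parameterized query: user input is bound as data, never parsed
                                                                                                    # as SQL, so a payload like "'; DROP TABLE comments; --" is
                                                                                                    # stored as plain text instead of being executed.
                                                                                                    conn.execute(
                                                                                                        "INSERT INTO comments (author, body) VALUES (?, ?)",
                                                                                                        (author, body),
                                                                                                    )
                                                                                                    conn.commit()

                                                                                                add_comment("mallory", "'; DROP TABLE comments; --")
                                                                                                print(conn.execute("SELECT body FROM comments").fetchall())  # payload inert
                                                                                                ```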

                                                                                                1. 3

                                                                                                  From the top comment:

                                                                                                  If the kernel is going to impose its view of what a container is, the question becomes which container construction should it be? The obvious answer might be what docker/kubernetes does, but some of those practices (like no user namespace, pod shared ipc and net namespace) are somewhat incompatible with what LXC does and they’re definitely wholly incompatible with other less popular container use cases, like the architecture emulation containers I use to maintain cross arch builds of my projects.
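
                                                                                                  For anyone unfamiliar with the user namespaces mentioned there: they are what let a container’s root map to an unprivileged uid on the host. A rough sketch of the underlying syscall, via ctypes on Linux with unprivileged user namespaces enabled; real container runtimes structure this very differently:

                                                                                                  ```python
                                                                                                  import ctypes, os

                                                                                                  CLONE_NEWUSER = 0x10000000  # flag value from <sched.h>

                                                                                                  libc = ctypes.CDLL("libc.so.6", use_errno=True)

                                                                                                  # Enter a fresh user namespace: the process can be "root" inside
                                                                                                  # it while remaining an unprivileged user on the host.
                                                                                                  if libc.unshare(CLONE_NEWUSER) != 0:
                                                                                                      err = ctypes.get_errno()
                                                                                                      raise OSError(err, os.strerror(err))

                                                                                                  # Map uid 0 inside the namespace to our real uid outside. An
                                                                                                  # unprivileged process may write a single-line map for its own uid.
                                                                                                  with open("/proc/self/uid_map", "w") as f:
                                                                                                      f.write(f"0 {os.getuid()} 1")

                                                                                                  print("euid inside the namespace:", os.geteuid())  # typically 0
                                                                                                  ```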

                                                                                                  As someone who has worked on a popular OSS project in the past, I find it incredibly frustrating when someone doesn’t bother to check whether such changes have been rejected before. Instead, they just write a patch and act like a jerk when it is rejected: “I wasn’t party to [the prior consensus against this change] and don’t feel particularly bound by it”.

                                                                                                  I know you’re upset that all your effort has been wasted, but maybe you should have spent 10 minutes on IRC first. Now core developers have to waste their time rehashing old issues instead of writing code 😡.

                                                                                                  1. 1

                                                                                                    Also, the rejected proposal was their own. It seems like basic intellectual honesty to at least refer to the earlier rejection and state what, if anything, is different this time around.

                                                                                                  1. 2

                                                                                                    They conclude that new Spectre variants are unavoidable due to the performance benefits of hyperthreading, but I saw an opinion piece by a formal methods researcher outlining the changes that are required to processor specifications to prevent future Spectre attacks. However, I can’t find it now. Anyone know of the piece I’m speaking of?

                                                                                                    1. 1

                                                                                                      I want this API. I’m tired of docker and the mess it has created.

                                                                                                      1. 1

                                                                                                        What mess has it created? I’m a bit out of the loop with the whole container world, but we’re evaluating things for work and I’d like to learn more about the current state of containers.

                                                                                                        1. 4

                                                                                                          It’s the least stable piece of system software I’ve ever used. If you’re trying to use more containers, then I recommend using LXC. I wrote about some of the issues here: https://www.scriptcrafty.com/2018/01/impulse-response-programming/

                                                                                                          1. 2

                                                                                                            Docker is implemented using a daemon with root privileges… there’s a reason all the adults in the room hate how Docker was originally built.

                                                                                                            1. 1

                                                                                                              Didn’t docker originally use LXC? The OP comment seems to be advocating LXC in a later reply as an alternative to docker. IIRC, the last time I tried to configure LXC to run in ‘unprivileged mode’, it was a major pain in the butt (but maybe this has improved?). What alternatives do you suggest?
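
                                                                                                              For reference, the painful part of unprivileged LXC mostly boils down to delegating a uid/gid range to your user and mapping container root onto it. A rough sketch with LXC 3.x-era config keys, where the user name and ranges are illustrative, not prescriptive:

                                                                                                              ```
                                                                                                              # /etc/subuid and /etc/subgid: delegate a uid/gid range to your user
                                                                                                              youruser:100000:65536

                                                                                                              # ~/.config/lxc/default.conf: map container root onto that range
                                                                                                              lxc.idmap = u 0 100000 65536
                                                                                                              lxc.idmap = g 0 100000 65536
                                                                                                              ```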

                                                                                                        1. 2

                                                                                                          This article is overflowing with remarkable statements, but this one is just amazing:

                                                                                                          Front-end development is complex because design is complex.

                                                                                                          Front-end development is complex because HTML grew, via tug-of-war processes, out of a heavily restricted SGML dialect with bizarre semantic/presentational crossover; CSS is straight-up insane (apart from grids, which took >6 years to get implemented sufficiently broadly to have any effect, and its inheritance model and arcane platform incompatibilities are still pretty nuts); and JS, though evolving away from its original flat-out craziness with each iteration, is increasingly reliant on an increasingly byzantine toolchain to allow it to do so, and suffers heavily from a sprawling, fragmented ecosystem perpetuated by extreme quality differentials and rampant wheel-reinvention, partly because people just like reinventing wheels but partly also because square, triangular, ovoid and zig-zag wheels are really bad for driving on. Implementing complex design on top of all that multiplies the complexity, sure, but it’s hardly the cause.

                                                                                                          1. 3

                                                                                                            Came here to share a similar sentiment: HTML, CSS, and JS are adopted children with lots of baggage. However, I disagree that this negates the author’s point: “solutions” to front-end development often involve an abstraction over one or more of the three technologies. What separates this from the standard “We can solve any problem by introducing an extra level of indirection… except for the problem of too many levels of indirection” is the extreme number of different abstractions, the accelerated bit-rot, and the lack of interoperability.

                                                                                                            The root problem is that HTML, CSS, and JS are evolving technologies, which slowly erode whatever advantages a given abstraction has to offer. Polymer (which was sold as a library) went through 3 major releases in three years and is now deprecated entirely. Angular has gone from version 2 to version 7 in two years. Frameworks have turned to micro-libraries to try and cope, but now a given “Angular” or “React” project might use half a dozen different technologies that make it incompatible with another “Angular” or “React” project.

                                                                                                            Even if you try to hew closely to the original language, small differences between a proposal and the eventual standard will result in migraines for any large project. Babel (stupidly) transpiling import to require is a major reason Node.js adopted .mjs for ES modules. TypeScript began as a straightforward superset of JavaScript, but JavaScript’s enhancements have slowly eroded compatibility and are often superior to what TypeScript offers. I don’t see how TypeScript can adopt JavaScript’s class members and other features without major structural changes for itself and all downstream projects (like Angular).

                                                                                                            I think the current landscape of frontend tools is the result of engineering practices at Facebook and Google: they have siloed technology stacks, large amounts of natural code churn, a monetary incentive to shave milliseconds of load time off of their websites, and armies of developers to service that technical debt. The rest of us want to be able to share code (god forbid data binding work across frameworks!) and not have to worry about major refactors every 6 months.

                                                                                                            1. 1

                                                                                                              All very good points, particularly the last para. Sometimes “the rest of us” get caught in the cross-fire. I mean, on the one hand obviously it’s amazing that we get free open-source tools to use (and that extends to Kubernetes et al on the back-end too), and I think React is pretty great in a lot of ways, but the pace of changes makes maintaining them complex and time-consuming, and in the case of e.g. Kubernetes it’s so heavily engineered that for “the rest of us” who don’t have planetary-scale deployments it can become massive overkill, yet it’s where everyone heads and piles all their support because Google are doing it. (I realise that’s kind of an extreme statement because there are plenty of people with larger deployments who benefit from it and even a small deployment can benefit from some features, but I think the general point stands.)