1. 4

    As a former user of XMPP, let me try a different list:

    1. XMPP is a morass of partly interoperable servers and clients, each supporting different long lists of extensions.
    2. Stuff doesn’t/didn’t work. I have a fine camera and a microphone, you do too, so does XMPP mean we can talk? The answer involves extensions in the plural and is too complex for my brain as user. My phone is always on, can it get notifications without burning through its battery? The answer is again too complex for my brain.
    3. Google stopped talking to other XMPP servers after a (rumours say debilitating) spam attack. AFAICT XMPP still doesn’t have an effective defense against this particular attack, which doesn’t make me feel good about the readoption by Google or others.
    4. Most users used a few servers (during the time when I still used it actively), so XMPP suffered the sluggishness of decentralised protocols (see 1, 2) but without getting their advantages.

    I have business cards with an XMPP address. I stopped handing those out long ago.

    1. 3

      Eh, I’m going to disagree with some of these.

      1 and 2 are easy to answer. For the client, you should use Dino on the desktop, and Conversations or Monal on mobile depending on your platform. If you want to use another client, you are now presumed to be an expert and able to solve any problems you have with it. For the server, you should use a server set up by someone who is up to date with the current state of XMPP; when you connect to it from Conversations, the server info should show that all the requested features are fully supported.

      3 is fair enough, but honestly, SMTP doesn’t have an effective general defense against spam, either, and that doesn’t stop people from using it. My understanding was that Google stopped talking to other XMPP servers mainly because they had enough marketshare that they didn’t need to anymore, and preferred to lock their users in.

      I don’t know about 4, or whether it’s still true.

      1. 3

        Are you saying XMPP is a federated protocol with zero to one recommendable clients per platform? If that’s an accurate assessment, then I think it can be added as a fifth problem on my list.

        In re Google, you can explain anything with “because they’re evil”, “because they want lock-in”, etc., and a lot of lazy people do. You should at least consider the possibility that they dropped XMPP because it wasn’t used enough to deal with the hassle of contact-request spam. Spammers used XMPP contact requests to get added to people’s contact lists and then sent spam via SMTP. There was a bad wave of that, Google had to choose between either decreasing the spam-signal value of people’s contact lists or getting rid of those XMPP contact requests, and used an axe to do the latter. SMTP was important, XMPP was just nice to have.

        Now, that’s hearsay. (Almost everything I’ve heard about Google’s antispam mechanisms is hearsay.) You get to judge: Is it more or less plausible than just “preferred to lock their users in”?

      2. 1

        This is a reply to myself because it’s a digression:

        I noticed that the developers of a couple of XMPP tools didn’t use XMPP addresses. They suggested that one might talk to them via XMPP, but by sending a private message to a nickname in a chatroom rather than by a plain XMPP message. IRCish behaviour rather than XMPPish. I didn’t understand why, but whatever it is, it suggests to me that there’s an impedance mismatch somewhere. I’d love to understand better.

        1. 1

          XMPP is a morass of partly interoperable servers and clients, each supporting different long lists of extensions.

          The situation is not so dire as you paint it.

          The answer involves extensions in the plural and is too complex for my brain as user

          Live Audio/Video chats are a complex topic. Even more so if you go for multi-party conferences. But you as a user should not need to encumber your brain with it; that is something for the developers implementing those protocols. :)

          My phone is always on, can it get notifications without burning through its battery? The answer is again too complex for my brain.

          That, again, is a complex problem. The basic solution is simple: use XEP-0352 Client State Indication. The tricky part is the magic where the server has to decide when to push a notification to you. Do you want it for every message? Or just for some? Maybe dependent on the sender? Maybe dependent on the channel? Or only if someone mentions your nickname in a chat? And this only for some chat rooms?
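
          To make that concrete, here is a rough sketch (in Go, with invented names and rules, not taken from any real server) of the kind of decision a CSI-aware server ends up making once the client has announced that it is inactive:

          ```go
          package csi

          import "strings"

          // Prefs is a hypothetical set of per-conversation notification settings.
          type Prefs struct {
              Muted        bool // never wake the client for this room
              MentionsOnly bool // wake it only when the user's nickname appears
              Always       bool // wake it for every message (e.g. a 1:1 chat)
          }

          // shouldPush is the "magic" part: once XEP-0352 tells the server the
          // client is inactive, every branch here is a policy decision, not protocol.
          func shouldPush(p Prefs, body, nick string, direct bool) bool {
              switch {
              case p.Muted:
                  return false
              case direct || p.Always:
                  return true
              case p.MentionsOnly:
                  return strings.Contains(body, nick)
              default:
                  return false
              }
          }
          ```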

          1. 1

            I agree with your list, but have to note that 4 (and to a lesser extent 3) seem to be true for pretty much every federated/decentralized system in existence, which suggests a more fundamental problem with the concept of federated services. Every once in a while there’s a post on here philosophizing about that, e.g.

            1. 1

              Sure, I know…

              Federated services need to be designed so that there’ll be few interop-relevant feature differences, because getting new changes widely deployed is so difficult. XMPP suffered because it had that problem, and suffered particularly badly because it was heavily oriented towards extensions, and therefore really needed ease of deployment and interop.

          1. 11

            Being a TA for a systems programming course in a nutshell. Except the students’ code usually doesn’t end up in any OS.

            1. 1

              I can’t imagine there are actually many systems in production running FreeBSD with telnet installed.

              1. 2

                  Your imagination falls way short of reality then: the log mentions “Obtained from: Juniper Networks” - Junos, the OS powering Juniper devices, is based on FreeBSD, and in my experience it is still not uncommon for routers and switches to be administered via telnet (they certainly all support it).

                So not only are there devices running telnet on FreeBSD in production, but they are very lucrative targets for infiltrating a network, since owning a router/switch will get you MITM capabilities.

                1. 1

                  Either Juniper is carefully reviewing every change they pull in from FreeBSD or nobody should be giving them any money.

                  1. 1

                      Errrr… well, the bogus commit came from Juniper Networks upstreaming their local changes, and the attention is just the number of replies from people watching FreeBSD commits.

                      The local change was made because of this report by hacker fantastic.

                    1. 1

                      I thought the whole point of FreeBSD is that you never have to deal with upstreamed vendor code. If Juniper publish their changes why don’t they just run Linux?

                      1. 1

                          They probably hired a bunch of people who were already FreeBSD developers, and those developers ask their employer to upstream the changes when it’s reasonable to do so.

                        It’s a logical business decision, too: local patches are a pain, they require merging on updates and mis-merges are a frequent cause of bugs.

                          Side-note: upstreaming is a bit more work than what the GPL requires. Upstream projects don’t like receiving “some unknown old version of your tree with this added work is available as a tarball here”; doing it properly usually requires some work. I’ve tried to do it in the past, unsuccessfully. Even once it built and worked with the new version, I had trouble with questions about why things were done a certain way, or how to fix test failures.

            1. 5

              Trading gold for lead. Two scoops of attack surface and complexity for speed when what we have is already quite fast.

              It will happen though, since the gain is easily quantifiable, will make great charts, and everyone in the space will have to do it to keep up with the joneses.

              Whereas the value of not having another JIT compiler inside your kernel will be much harder to quantify until the SHTF.

              1. 2

                Existing JITs are problematic from a security perspective mostly because they are used to execute untrusted code in a trusted environment, which is not the case in the described scenario: kernel code is executed in the kernel like it always was, only with some specialization applied at runtime.

                1. 2

                    “because they are used to execute untrusted code in a trusted environment”

                    “kernel code is executed in the kernel like it always was”

                    Not the whole picture. The kernel might execute trusted code on malicious inputs passed through via compromised apps or attempts to compromise something. By itself, that can lead to a kernel attack. The JIT adds the possibility that the specialization process introduces vulnerabilities that weren’t there before. Both the AOT compiler and the app might have gotten testing that the user-specific JIT-ed code didn’t. So, you get one extra layer of attack surface with the JIT. That they have to do less work than AOT compilers puts an upper bound on their security in practice, too.

                  1. 1

                    That part was clear enough from the article.

                      Adding a Turing-complete JIT to a kernel for a little more performance is ill-advised. He mentions BPF, though that’s apples and oranges–BPF isn’t Turing-complete. And in this one phrase I count three tasks of the never-ending variety:

                    for deployment we’d probably end up either reusing a lighter-weight code generator or else creating a new one that is smaller, faster, and more suitable for inclusion in the OS. Performance of runtime code generation isn’t just a throughput issue, there’ll also be latency problems if we’re not careful. We need to think about the impact on security, too.

                1. 4

                  Reminds me of Snabb Switch, a Lua userspace networking framework that heavily relies on LuaJIT for its high performance. Interestingly Regehr also mentions that he sees the most potential for this technique in the network stack.

                  1. 21

                    Hot take: $GOPATH is The Only Good Part of Go. I now clone all git repos as ~/src/github.com/user/project.

                    Super hot take: Go is an anti-intellectual language. The attitude of the creators of Go sounds a lot like “hurr durr screw these academics with their dependent linear generic polymorphic magic types, we want the good old days of C back but with concurrency now” and “programmers can’t be trusted to use advanced constructs, just give them simple C stuff anyone can learn in a day”. Rob Pike thinks that “Syntax highlighting is juvenile”. WTF?

                    Extra hot take: the most offensive part of Go is the internals of the main implementation. Don’t ever let people with a Plan 9 fetish design a compiler. Why? Just read the readme of c2goasm, a workaround that lets you call non-Go code without cgo overhead. And look at the Go assembler itself. If there’s one thing that AT&T and Intel syntax fans will 200% agree on, it’s that the Go/Plan9 syntax is an abomination. (Also it doesn’t fucking support the instructions (and even addressing modes!) you might need. Literally that documentation page tells you with a straight face to just encode your fancy instructions as byte constants: BYTE $0x0f; BYTE $0x6f; BYTE $0x00. Are you kidding me?!)

                    1. 14

                      Cold take: 99% of people programming in go will not be writing plan9 syntax assembly.

                      1. 11

                        99% of people programming in go are welcome to ignore anything about go internals. But I think people deserve to know that the internals are very weird.

                        1. 6

                          I heard/read that the rationale was that the Plan 9 stuff might have been weird, but Rob Pike and Ken Thompson (soon followed by Russ Cox) were all deeply familiar with the entire toolchain and figured they could get cross-architecture/cross-OS compilation working quickly and build from there. It might have been a weird toolchain, but they did write it, so they were starting with a known quantity.

                          1. 8

                            That doesn’t sound like a great rationale for a production language sponsored by a huge serious company.

                            Wait, actually, this is a very Google thing to do. There’s a history of big complex projects built at Google that are mostly incomprehensible to anyone outside of Google and don’t integrate well with the outside world. Like GWT.

                            1. 9

                              Google was only incidentally involved in the design of Go. Go was made by 3 smart people, and bankrolled by a “huge serious company”.

                      2. 2

                        Super hot take: Go is an anti-intellectual language.

                        That’s good, actually.

                        1. 2

                          Hot take: $GOPATH is The Only Good Part of Go. I now clone all git repos as ~/src/github.com/user/project.

                          GOPATH is the single worst misfeature of any language I’ve ever used. Any language that tells me how to lay out my code, where to put it? I hate them all. Java forces you/encourages you heavily to put one public class per file. Blergh. Go forces you/encourages you heavily to use GOPATH. Blergh. Anything like that is just shit. C is one of the best, because all of its ‘module’ stuff is an emergent property of people using the language and its preprocessor, not some pre-designed over-engineered module system with namespaces and complicated lookup paths.

                          Super hot take: Go is an anti-intellectual language. The attitude of the creators of Go sounds a lot like “hurr durr screw these academics with their dependent linear generic polymorphic magic types, we want the good old days of C back but with concurrency now” and “programmers can’t be trusted to use advanced constructs, just give them simple C stuff anyone can learn in a day”.

                          That’s not anti-intellectual. It’s not fucking anti-intellectual to not want to write Haskell or some exceedingly overly complex reimplementation of the worst half of Haskell like most languages have become.

                          Rob Pike thinks that “Syntax highlighting is juvenile”. WTF?

                          Syntax should be readable without syntax highlighting.

                          1. 7

                            That’s not anti-intellectual. It’s not fucking anti-intellectual to not want to write Haskell or some exceedingly overly complex reimplementation of the worst half of Haskell like most languages have become.

                            I thought you might want to know that this reads like you’re misdirecting your anger at Haskell towards programming languages as a whole. In general, it’s difficult to be told to accept something by someone who is not very accepting.

                          2. 0

                            It’s an anti-“hurr durr I’m an intellectual so do as I say because I know what’s best for you” language and that is not a bad thing. Every single go team member probably has a more solid understanding of (any arbitrary aspect of) compsci than you do.

                            1. 18

                              Every single go team member probably has a more solid understanding of (any arbitrary aspect of) compsci than you do.

                              … And therefore you aren’t allowed to have dissenting opinions?

                              Come on now.

                              1. 5

                                Which part of my comment made you think I want to police opinions? I just made a counter argument to the “go is bad because it’s anti-intellectual” line, dunno how you got to the meta level. I do get the feeling now though that you would very much like to police my “dissenting” opinions, hmm?

                                Anyways, Miles understands the point I was trying to make and put it quite well I think, so no need to elaborate on that.

                                1. 1

                                  Your comment is based on an assumption that the OP isn’t as smart as Rob Pike et al.

                                  I don’t care about your opinion. Have whatever shitty opinion you want—that’s your prerogative.

                                  1. 2

                                    While that seems like a fairly safe assumption to me (for suitable meanings of “smart”), it is not actually a prerequisite to my point.

                                    All I am assuming is that the Go designers do in fact know a decent amount of compsci but chose not to include various concepts for reasons that go beyond “academics came up with it”, i.e. they evaluate ideas on their own merits, instead of assuming anything originating in academia is either good or bad. Now, obviously their reasons and decisions can be debated, but implying they boil down to nothing more than a dislike of academia is either dishonest or idiotic (or both) and contributes nothing of any value to the decision making process.

                                2. 3

                                  No, but therefore you aren’t allowed to describe them as anti-intellectual. Intellectual giants aren’t anti-intellectual.

                                  1. 4

                                    Hmm. I am not sure you understand anti-intellectualism, but maybe it, as a concept, needs to change to be more inclusive of experience gained from years in industry.

                                    Calling Rob Pike et al. anti-intellectual makes sense from the textbook definition, because they’ve eschewed everything but garbage collection from academic CS of the last 30+ years, defining the language instead based on personal feelings from industry experience.

                                    I am certainly open to citations that suggest otherwise…please include the citation’s publish date as well.

                                    1. 5

                                      Just because academics have designed a feature doesn’t mean that feature needs to exist, nor does it make the feature useful.

                                      1. 3

                                        Just because Rob Pike says no to a feature, doesn’t mean that the feature shouldn’t exist, nor does it make the feature not useful.

                                        1. 4

                                          Nobody is saying that Rob Pike’s word alone is a good reason to not have a feature in a programming language.

                                          You say Rob Pike et. al. have ignored everything from academic CS from the last 30+ years. Maybe, but if you actually look at the last 30+ years of academic CS (in the area of programming languages) the most visible portion of it is an intense focus on very strongly and statically typed pure functional programming languages: ML derivatives, Hindley-Milner type inference algorithms extended to extensions of System F, dependent typing, etc.

                                          What widely used programming language out there, other than Haskell, doesn’t ignore everything from the last 20 years of programming language research at least? Java? C++? C#? Javascript? Python? Rust? What about Rust’s type system is actually based on research done in the last 20-30 years?

                                          Parametric polymorphism is a lot more than 30 years old.

                                          1. 4

                                            Rust? What about Rust’s type system is actually based on research done in the last 20-30 years?

                                            As a matter of fact, more than the Type System has been influenced by academia… https://doc.rust-lang.org/1.2.0/book/academic-research.html

                                            As for the rest of the languages you mention… research is trying to unfuck C. C++ — I don’t follow its development well. Python has always ignored academic influences, and its creator has said some really dumbfounding things over the years, to boot.

                                            Java was born directly out of industry, but over the years, heavy research (not sure how much is purely academic vs industry) has gone into the implementation, the garbage collection algorithms, nio, the recent proofs of the type system’s unsoundness…

                                            JavaScript was built in 2 weeks, but recent ECMAScript has authors like Dave Herman, who has a PhD with a PLT focus from Northeastern…

                                            So, I think your statement, and position are categorically wrong.

                                            (Please excuse the lack of links, and brevity, I am on my phone and eating doughnuts)

                                            1. 0

                                              As a matter of fact, more than the Type System has been influenced by academia… https://doc.rust-lang.org/1.2.0/book/academic-research.html

                                              And Go’s concurrency system is influenced by academia. Everything in every language is influenced by academia. What I asked was what about Rust’s type system was influenced by recent research, given that the criticism of Go is that its type system isn’t based on recent research.

                                              As for the rest of the languages you mention… research is trying to unfuck C. C++ — I don’t follow its development well. Python has always ignored academic influences, and its creator has said some really dumbfounding things over the years, to boot.

                                              All of this seems to suggest that being based on recent research isn’t actually a relevant factor for comparing how good languages are, and thus that the criticism of Go as not being based on recent research and being bad as a result, or bad because its creators ignore recent PLT research, or whatever, is clearly bogus.

                                              1. 4

                                                And Go’s concurrency system is influenced by academia. Everything in every language is influenced by academia. What I asked was what about Rust’s type system was influenced by recent research, given that the criticism of Go is that its type system isn’t based on recent research.

                                                Your original “recent” was “within the last 20-30 years”. Incidentally, the thing from Go that is most often cited as “influenced by research” is its weird interpretation of CSP by Hoare, originally published > 30 years ago in 1978…

                                                The Academic Research page I linked for Rust (Specifically, the Types Influence):

                                                • Region based memory management in Cyclone (2002)
                                                • Safe manual memory management in Cyclone (based on its references, after 2005)
                                                • Typeclasses: making ad-hoc polymorphism less ad hoc (1997?)
                                                • Macros that work together (2012)
                                                • Traits: composable units of behavior (2003)
                                                • Alias burying (2001?)
                                                • External uniqueness is unique enough (2002?)
                                                • Uniqueness and Reference Immutability for Safe Parallelism (2012)
                                                • Region Based Memory Management (1994)

                                                Now that I’m not eating doughnuts, nor on my phone:

                                                All of this seems to suggest that being based on recent research isn’t actually a relevant factor for comparing how good languages are, and thus that the criticism of Go as not being based on recent research and being bad as a result, or bad because its creators ignore recent PLT research, or whatever, is clearly bogus.

                                                What is a “good language”? It’s pretty subjective. What’s good to me isn’t good to you. What’s interesting is how well the Blub Paradox describes what’s going on here.

                                                When I use Go, I cringe at the fact that I can’t safely create a closed enumeration that can’t be nil. You can do weird things with interfaces (in fact, there’s a solution in this very thread), but as soon as you do, then a value can be nil, and now you’re broken. Which leads me to the inclusion of nil, The Billion Dollar Mistake… somewhat hilariously proclaimed by the influencer of Go’s crowning feature, “something CSP-like,” goroutines!

                                                Many Go programmers I talk to suggest “being careful and using iota is good enough.” Those same programmers will say that “there’s no way you can be careful enough to use C.” I guess carefulness is a spectrum? I’d prefer computers to remove some of my need to be careful instead. After all, research has shown that computers are much better at it than humans…
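
                                                For anyone who hasn’t run into this, here is roughly what the two options look like (a sketch with invented names, not the specific solution referenced above). The iota version happily accepts Color(42) and its zero value; the interface version is closed to outside implementations, but its zero value is nil:

                                                ```go
                                                package enums

                                                // Option 1: iota. Compact, but nothing stops Color(42),
                                                // and the zero value silently means Red.
                                                type Color int

                                                const (
                                                    Red Color = iota
                                                    Green
                                                    Blue
                                                )

                                                // Option 2: a "sealed" interface. Only this package can add
                                                // variants, so the set is closed, but `var s Shade` is nil.
                                                type Shade interface{ isShade() }

                                                type Crimson struct{}
                                                type Emerald struct{}

                                                func (Crimson) isShade() {}
                                                func (Emerald) isShade() {}

                                                // describe shows the hole: a nil Shade quietly falls through
                                                // to default instead of being ruled out by the type system.
                                                func describe(s Shade) string {
                                                    switch s.(type) {
                                                    case Crimson:
                                                        return "crimson"
                                                    case Emerald:
                                                        return "emerald"
                                                    default:
                                                        return "nil or unknown"
                                                    }
                                                }
                                                ```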

                                                So, when Go’s type system is a small step up from C in power, despite 47+ years of “we can do better than that,” you’ve gotta wonder what’s going on?

                                                My theory is that years of developing operating systems with C taught the Go authors how to avoid getting shot with their foot gun. However, instead of destroying the foot guns, they figured, “well, if we keep the foot guns on a shelf and tell people not to touch them, they won’t get shot.” The people yelling loudly about Go not adopting PLT research are the people who clearly see the foot guns on the shelf, and the stool right next to the shelf.

                                                ~~

                                                BTW, an important question that we’re not bringing up here is “what is even research?” Obviously, the folks making proposals to change Go are doing some sort of textbook-definition research. Is it any different from the work Brian Goetz is doing on Project Valhalla, “research” within Oracle? Do we constrain the set of relevant articles to those appearing at a major conference, like OOPSLA, POPL, ICFP? That seems silly. A bigger question, for sure.

                                3. 7

                                  Do you know the person you’re responding to or are you just insulting them because they said something you don’t agree with?

                                  1. 3

                                    I do not know them and I did not write what I did to insult them, just to put what they said in some context. My comment applies equally well to me and with some specific exceptions probably to everyone in this thread. No shame in not being a genius.

                                  2. 4

                                    that’s literally just more anti-intellectualism lol

                                    1. 4

                                      If you aren’t ready to admit that some kinds of intellectualism can be bad, I don’t think I have anything to say to you.

                                1. 1

                                  it is easy and cheap to shit all over a volunteer run event for people of a niche interest, but it takes effort to run such an event. I prefer people that do the latter to the people that do the former.

                                  Why is the author so full of hate? If they used that energy to do something constructive, we would all be better off.

                                  1. 5

                                    it is easy and cheap to shit all over a volunteer run event for people of a niche interest, but it takes effort to run such an event. I prefer people that do the latter to the people that do the former.

                                    well, volunteer and sponsor run. it’s a nice event, but the time i’ve attended it had quite a bit of the big money feeling, compared to c3 (which is not for free, but has reasonable prices compared to other conferences) and maybe froscon (which is also sponsored).

                                    n-gate doesn’t shit directly on the event, but on the unreflected tech-hypes. i like matrix, but got a good laugh out of placing the yellow-vests in context with the matrix adoption in france.

                                    Why is the author so full of hate?

                                    try to look at n-gate as satire. you may not like it in this case, but satire is a valuable thing to have in societies.

                                    If they used that energy to do something constructive, we would all be better off.

                                    i bet they do many constructive things, but maybe are angry about the current tendency of anti-federation and overengineering which makes hacking on things hard. i could also ask you to use your energy for something constructive and not for criticizing n-gate :)

                                    1. 3

                                      well, volunteer and sponsor run. it’s a nice event, but the time i’ve attended it had quite a bit of the big money feeling, compared to c3 (which is not for free, but has reasonable prices compared to other conferences) and maybe froscon (which is also sponsored).

                                        You seem to attend very different commercial events than I do. At FOSDEM there are no sold talking slots, the dev-rooms are community organized, and there are no stands with hordes of sales people trying to find leads or any of the other things you may find at what I would call a commercial event.

                                      Also, the CCC has a bigger personal backing than Fosdem. They are not just an org for one event, they are a big decentralized organization. BTW: They also have sponsors for things like internet connectivity..

                                      n-gate doesn’t shit directly on the event, but on the unreflected tech-hypes. i like matrix, but got a good laugh out of placing the yellow-vests in context with the matrix adoption in france.

                                        FOSDEM is anything but hype in my book. Where else can I see a talk about the latest developments of GRUB or debian developers showing new things? Also calling somebody who works somewhere a “corporate drone” is not satire, but childish. Is the author 16 and has a Che Guevara poster on his wall, or what?

                                      try to look at n-gate as satire. you may not like it in this case, but satire is a valuable thing to have in societies.

                                      It is just not well done and criticising bad satire is also a part of societies.

                                      i bet they do many constructive things, but maybe are angry about the current tendency of anti-federation and overengineering which makes hacking on things hard.

                                      Making fun of people who pour their time into a project of their desire is awful IMO.

                                      i could also ask you to use your energy for something constructive and not for criticizing n-gate :)

                                      Spreading love, not hate is a valuable contribution. Also, I have some time to kill on the train to Brussels.

                                      1. 1

                                        You seem to attend very different commercial events than I do.

                                        I don’t attend many events anymore in general, as I don’t get much personal gain from attending. Big crowds tend to consume my attention.

                                          At FOSDEM there are no sold talking slots, the dev-rooms are community organized, and there are no stands with hordes of sales people trying to find leads or any of the other things you may find at what I would call a commercial event.

                                        I never said that there are sold talking slots. I said that it had a feeling of big money involved. Sponsored coffee booths, lots of giveaways, some of them not cheap like stickers are.

                                        Also, the CCC has a bigger personal backing than Fosdem. They are not just an org for one event, they are a big decentralized organization.

                                        True

                                        BTW: They also have sponsors for things like internet connectivity..

                                          Ok, but that sponsor was not visible to me when I was there several times.

                                        n-gate doesn’t shit directly on the event, but on the unreflected tech-hypes. i like matrix, but got a good laugh out of placing the yellow-vests in context with the matrix adoption in france.

                                          FOSDEM is anything but hype in my book.

                                        I never said that FOSDEM is hype, but that there are unreflected tech-hypes present there.

                                        Where else can I see a talk about the latest developments of GRUB or debian developers showing new things?

                                        Wherever they decide to show up ;)

                                        Also calling somebody who works somewhere a “corporate drone” is not satire, but childish. Is the author 16 and has a Che Guevara poster on his wall, or what?

                                        Placing ad-hominem arguments against the usage of ad-hominem arguments doesn’t work.

                                        […] criticising bad satire is also a part of societies.

                                        Agreed.

                                        Making fun of people who pour their time into a project of their desire is awful IMO.

                                          The threshold for when things are considered awful is different for everyone. Politicians can also be considered to pour their time into a project they desire. Is it awful to make fun of them?

                                        Spreading love, not hate is a valuable contribution.

                                        Yes, but it can be a fine line between the two.

                                        Also, I have some time to kill on the train to Brussels.

                                        maybe the n-gate author had time to kill too ;)

                                        1. 1

                                            At FOSDEM there are no sold talking slots, the dev-rooms are community organized, and there are no stands with hordes of sales people trying to find leads or any of the other things you may find at what I would call a commercial event.

                                          I never said that there are sold talking slots. I said that it had a feeling of big money involved. Sponsored coffee booths, lots of giveaways, some of them not cheap like stickers are.

                                            The github coffee thing is nice. There is one little sign saying who the sponsor is, and that is all. You can also opt out of it, unlike at commercial conferences, where every coffee break is sponsored by blabla and it is the only coffee available. Calling that coffee cart sponsored by “big money” is a bit of a stretch (yes, github is now owned by MS, but that was not the case in previous years).

                                          BTW: They also have sponsors for things like internet connectivity..

                                            Ok, but that sponsor was not visible to me when I was there several times.

                                          See infrastructure review, it usually gets mentioned there.

                                            FOSDEM is anything but hype in my book.

                                          I never said that FOSDEM is hype, but that there are unreflected tech-hypes present there.

                                            I spend a lot of time with people at work who love the latest hype (blockchain, IOT, kubernetes etc. etc.) and I like fosdem for being everything but that. There is a lot more fringe stuff and the presentations are less of an ego show than at other tech events.

                                          Where else can I see a talk about the latest developments of GRUB or debian developers showing new things?

                                          Wherever they decide to show up ;)

                                          def. not at the latest hype conference

                                          Also calling somebody who works somewhere a “corporate drone” is not satire, but childish. Is the author 16 and has a Che Guevara poster on his wall, or what?

                                          Placing ad-hominem arguments against the usage of ad-hominem arguments doesn’t work.

                                          The author started with the ad-hominem, I am calling them out on their bs.

                                          Making fun of people who pour their time into a project of their desire is awful IMO.

                                            The threshold for when things are considered awful is different for everyone. Politicians can also be considered to pour their time into a project they desire. Is it awful to make fun of them?

                                            Politicians are usually paid for what they do; I am speaking about volunteer-based FLOSS projects. That is something else.

                                          1. 1

                                              The github coffee thing is nice. There is one little sign saying who the sponsor is, and that is all. You can also opt out of it, unlike at commercial conferences, where every coffee break is sponsored by blabla and it is the only coffee available. Calling that coffee cart sponsored by “big money” is a bit of a stretch (yes, github is now owned by MS, but that was not the case in previous years).

                                            I think github was “big money” even before the aquision, but the free coffee booth isn’t the point. The point I made was that I had the feeling that there is money involved, more than I have had the feeling at, for example, the C3s I have been to.

                                              I spend a lot of time with people at work who love the latest hype (blockchain, IOT, kubernetes etc. etc.) and I like fosdem for being everything but that. There is a lot more fringe stuff and the presentations are less of an ego show than at other tech events.

                                            Again, I don’t want to discuss the right of existance of fosdem. There are many nice “fringe” talks and dev rooms there. One could observe that n-gate doesn’t make fun about these, at least from my pov.

                                            Also calling somebody who works somewhere a “corporate drone” is not satire, but childish. Is the author 16 and has a Che Guevara poster on his wall, or what?

                                            Placing ad-hominem arguments against the usage of ad-hominem arguments doesn’t work.

                                            The author started with the ad-hominem, I am calling them out on their bs.

                                            I don’t think that it works this way.

                                              Politicians are usually paid for what they do; I am speaking about volunteer-based FLOSS projects. That is something else.

                                              Ok, let’s just take any random group, $FOO, of people who do $BAZ. Someone else, $BAR, will be against $BAZ and will make fun of the random group $FOO. Is it ok for $BAR to make fun of $FOO, or not?

                                    2. 3

                                      It’s satire. I’m a satirist at work. People will love it, roll their eyes (usually with a smirk), ignore it, or occasionally say “Hey now!” I’m nice enough to keep the last to the minimum since I’m not trying to deeply anger or hurt people. It should be a temporary shock that mostly entertains them when they recognize what I’m doing.

                                        The best satire is based on truths that need to be called out. A lot of what he says in that article is true partly or totally. His critiques should provide an opportunity for reflection. He can do it as boring, factual pieces or in a humorous style we can have fun with. The latter gets more attention. It’s not boring. It’s also provocative: something shown in psychology to get more attention than being nice due to how primitive parts of the human brain work. So, you could say he’s being useful and a dick at the same time. Partly to call out problems. Primarily for his and others’ amusement.

                                      1. 1

                                        Why is the author so full of hate?

                                        Do you really want to know?

                                      1. 8

                                        We must make software simpler. Much much simpler.

                                        I am not sure whether this is feasible. Abstractions exist for reasons, one of them is to forget. Realistically, if Node.js developers are required to understand V8, not many things would get done. If depending on V8 (de facto proprietary software if ever there was any) is okay, and I think it is, I don’t see much difference in depending on other things.

                                        1. 6

                                          Abstractions exist for reasons, one of them is to forget.

                                          Indeed, but most of the obstacles to understandability don’t come from abstractions but from layers of indirection, which do little to provide introspection affordances that help the programmer understand the system, and which immensely increase the surface area for bugs. As noted by another article posted under this story:

                                          Accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.

                                          We programmers are so accustomed to intermediate layers that most of us conflate the two. A good example of the complexity-reducing power of abstraction is Plan9’s commitment to “everything is a file” (for real, no ioctls).

                                          Realistically, if Node.js developers are required to understand V8

                                          The whole point of the article is that something like V8 shouldn’t exist in the first place. There are more than enough examples of smaller code bases, from VPRI’s work to the Tiny C Compiler, that show that such a world is possible today.

                                          1. 1

                                            Abstractions exist for reasons, one of them is to forget.

                                            But all abstractions are leaky (to varying degrees), so details from lower levels will bleed into the upper levels no matter what you do. This includes restrictions, (wanted or unwanted) features, performance limitations and security problems. If you can cut out a layer of abstraction without losing too much expressiveness, it is almost always better.

                                            1. 4

                                              Case in point: We were hit with a problem caused by a fix for the recent Ghostscript vulnerability in ImageMagick. The fix simply disabled PDF processing in ImageMagick’s policy.xml, which caused our perfectly functioning code to stop working. The code would send out an e-mail with a PDF containing some information. After the fix, ImageMagick would silently start creating an empty PDF file (it’s PHP, did you expect it to signal an error in a decent way?).

                                              This involves two or even three levels of abstraction, depending on how you look at it: Our image writer happened to use ImageMagick to make the PDF, and ImageMagick happened to use Ghostscript, so it had to be disabled at the configuration level by the sysadmin. If ImageMagick didn’t use Ghostscript, this problem would not have existed. Of course, the tradeoff is that this would mean it would need to use its own PDF processing instead. And the vulnerability could’ve been there too (but that’s perhaps less likely if it didn’t contain a full-fledged PostScript interpreter, which in our case was completely unnecessary as we were only generating PDFs, not reading them).
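
                                              The lesson we took from it, sketched here in Go rather than the original PHP (made-up paths, and assuming ImageMagick’s convert is on the PATH), is to stop trusting the stack to fail loudly and to check the artifact itself before mailing it out:

                                              ```go
                                              package pdfgen

                                              import (
                                                  "fmt"
                                                  "os"
                                                  "os/exec"
                                              )

                                              // convertToPDF shells out to ImageMagick and refuses to accept a silently
                                              // empty result, which is exactly the failure mode described above.
                                              func convertToPDF(src, dst string) error {
                                                  if out, err := exec.Command("convert", src, dst).CombinedOutput(); err != nil {
                                                      return fmt.Errorf("convert failed: %v: %s", err, out)
                                                  }
                                                  info, err := os.Stat(dst)
                                                  if err != nil {
                                                      return err
                                                  }
                                                  if info.Size() == 0 {
                                                      // Whichever layer swallowed the error (policy.xml, ImageMagick, the
                                                      // language binding), an empty attachment should never go out by mail.
                                                      return fmt.Errorf("%s is empty; PDF support disabled by policy?", dst)
                                                  }
                                                  return nil
                                              }
                                              ```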

                                              1. 4

                                                I agree with what you are trying to say but I see no abstractions here, just layers of indirection. I don’t think the problem is leaky abstractions, but rather that we (myself included) don’t know how to properly abstract. Because understanding the current system is unfeasible we just want to add a layer on top that treats the underlying system as a black box to do what we want and call it a day. And so the onion keeps growing and growing.

                                                1. 2

                                                  I highly recommend Richard Gabriel’s book Patterns of Software, the first three chapters (“Reuse versus Compression”, “Habitability and Piecemeal Growth” and “Abstraction Descant”) are about exactly this topic. Written by the author of the “Worse is Better” essay, probably the most profound thinker on software development that I know of.

                                              2. 1

                                                But all abstractions are leaky (to varying degrees)

                                                Most are. DSLs show us we can have non-leaky abstractions for a lot of things.

                                                1. 1

                                                  Could you elaborate with some specific examples? Off the top of my head, I don’t see how DSLs would be inherently better than any other abstractions; to me they feel mostly the same, i.e. most of them are leaky, though from time to time one may strike a pure one (like, say, SQL? Dockerfile DSL?). Isn’t every function call a “DSL statement”, and every “framework” a DSL, just with a convoluted syntax?

                                                  1. 1

                                                    State-machine compilers are an easy example where you just list the functions, states, and transitions. The compiler generates all the boilerplate. Then you switch modes to fill in the blanks.
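
                                                    Hand-written, the output of such a generator has roughly this shape (an illustrative sketch in Go, with invented states and events): the table is the part you just list, the onEnter map is the blanks you fill in.

                                                    ```go
                                                    package main

                                                    import "fmt"

                                                    type State string
                                                    type Event string

                                                    // The part you just list: states, events, and transitions.
                                                    // A state-machine compiler would emit this table from a short spec.
                                                    var transitions = map[State]map[Event]State{
                                                        "Idle":       {"connect": "Connecting"},
                                                        "Connecting": {"ok": "Online", "fail": "Idle"},
                                                        "Online":     {"drop": "Idle"},
                                                    }

                                                    // The blanks you fill in: what to do when a state is entered.
                                                    var onEnter = map[State]func(){
                                                        "Online": func() { fmt.Println("connected!") },
                                                    }

                                                    // step ignores events that don't apply in the current state.
                                                    func step(cur State, ev Event) State {
                                                        next, ok := transitions[cur][ev]
                                                        if !ok {
                                                            return cur
                                                        }
                                                        if f := onEnter[next]; f != nil {
                                                            f()
                                                        }
                                                        return next
                                                    }

                                                    func main() {
                                                        s := State("Idle")
                                                        s = step(s, "connect")
                                                        s = step(s, "ok") // prints "connected!"
                                                        fmt.Println("final state:", s)
                                                    }
                                                    ```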

                                                    Similar stuff can be done with GUIs since they’re FSMs or close to it. Might even count a subset of HTML with CSS 1 or 2 in there. They stay really close to the document-with-styling metaphor. It only got leaky once you include JavaScript. That makes sense, though, given that it transforms the document into an executable program. I still used to use well-designed snippets from DynamicDrive.com, like menus, before I learned JavaScript, since you just filled in the blanks. XML later fixed that abstraction by switching from document to data models.

                                                    SQL is another one many cite. It might fit here as long as you are just doing queries.

                                                    1. 3

                                                      Hmm, I think I’m still not really convinced. In my opinion:

                                                      1. FSMs are maybe closest to what I’d call a good abstraction, in that they embrace the leakiness - by limiting themselves to what they’re good at modelling. The “blanks filling” in my eyes is where they beautifully support good cooperation with other abstractions.
                                                      2. HTML, then with CSS 1, then 2, in my opinion is an example where the abstraction was seriously leaky and thus eventually broke apart (via JS). The CSSes already are examples of trying to patch HTML into submission. Infamously, creating non-rectangular shapes was always a problem here. Also motion. I’d say TeX is a good example of how the seemingly simple task of text/page layout is surprisingly hard, bordering on impossible.
                                                      3. As to SQL, I think what it benefits from is the consistent and complete (I think) mathematical abstraction of relational DBs it’s based on. That said, in my opinion it’s still leaky, by virtue of needing an interpreter/VM. Thus for any practical use, one still has to employ a lot of divination and guesswork (or often cargo culting) trying to please the optimizer spirits.

                                                      I think my main takeaway from the classic article about leaky abstractions is that all of them are, and thus the fact should be embraced by explicitly designing them to allow clean and painless escape valves. See the FSMs. Also unsafe keyword in Rust, or ports in Elm.

                                                      1. 1

                                                        Good points. I guess it depends on one’s definition of leaky. For me, I’m fine with an abstraction so long as I can ignore what’s in it. Just call it conforming to the contracts. Others might have something different in mind.

                                                      2. 1

                                                        You must have a curious conception of non-leaky abstractions if SQL queries are to be included. And one where all of practical computing being built on non-leaky abstractions wouldn’t be all that much of an improvement on the status quo.

                                                        1. 1

                                                          Just the parts about describing what to pull from or put in what. It looks really high-level and purpose-built compared to platform-specific, low-level, imperative code it replaced. We even taught folks with no programming experience how to use that model with Access, 4GL’s, etc.

                                                          1. 2

                                                            That’s true, but when used in anger sooner or later you’ll still have to spend time with the implementation details of the database. Certainly if you want to make effective use of the hardware’s capabilities.

                                                            1. 1

                                                              Oh yeah, definitely true!

                                                  2. 1

                                                    I don’t really think most systems warrant as much complexity as they have. Alan Kay had a talk about building the right language for each layer of an operating system. He has a demo of two languages: one is a DSL that implements a rasterizer, the other is a DSL for implementing network protocols. To be fair, I haven’t looked deeply into the code or run it, but I think the idea works.

                                                    For something that I have looked at a bit more closely, Interim OS is an OS implemented in C99 and Lisp. I’ve taken a weekend to read through the JIT code, and with a few more days could probably start working in the codebase (on small to medium tasks). The entire kernel is written in 12K lines of C code and a few thousand of Lisp. It’s capable of running on a Raspberry Pi, with a keyboard driver, basic VGA, a TCP stack, and even an IRC client.

                                                    Granted, I can’t say I fully understand this code either, but it’s definitely reasonable for me to grok the whole OS in a few weeks. I don’t think most people have time to vet every piece of software. That’s why we have sandboxing. However, the OS, my browser, my crypto libraries, my programming languages, my developer tools, and the base software running on my system should all be written in a way that I can understand and extend them.

                                                    I feel like software changes too fast, and just keeps changing for no good reason – we very rarely reach the point where we can just stop messing with it and let it do its job. Feature creep is a real problem.

                                                  3. 1

                                                    Abstractions exist for reasons

                                                    Some do, some don’t, some have reasons but everyone would still be better off without them. Nobody is arguing we should just drop all the abstractions.

                                                  1. 13

                                                    That’s bad. So much for my hopes of Red Hat being an independent check against these big companies. If it’s actually $34 billion, then this tops WhatsApp as biggest acquisition if memory serves right. Makes sense on IBM’s part since they mostly bet the farm on Linux on top of being a major contributor to the kernel.

                                                    1. 3

                                                      Do you think RHEL has acted as competition to IBM in the past decade?

                                                      1. 9

                                                        IBM is a patent-trolling firm that turns everything they acquire into crap versus what it was. I imagine them owning Red Hat’s IP early on might have made Linux worse off. Instead, they developed independently in goals and style, then interdependently in supporting the kernel. Now, that large, independent party is controlled by IBM.

                                                        I’d rather there be as large a number of companies as possible contributing to and/or influencing the future of Linux.

                                                        1. 9

                                                          IBM has had a big influence on RHEL since the early days. They started steering kernel development and taking advantage of the Linux platform very early on, thanks to some very smart executives. The Linux strategy enabled IBM to outsource OS development and create essentially a trust that broke the MSOFT monopoly without much investment and outside the imagination of anti-trust enforcement. Think about it: if, in 1998, IBM/Oracle/Novell/Intel had formed an OS company, it would have been the immediate target of an anti-trust action.

                                                          1. 5

                                                            Everything you say makes sense. The outsourcing OS development note is something I’ve always pointed out. They’re not fully freeloading off Linux but get way more value out than they put in. Smart strategy for them. Then, Novell grabbed Suse. The rest just build on top of the big ones with Shuttleworth’s Canonical being the outlier: a straight-up loss to make a desktop happen.

                                                            1. 4

                                                              They’re not fully freeloading off Linux but get way more value out than they put in.

                                                              Doesn’t everyone? :) Certainly true of myself.

                                                              1. 2

                                                                (Glances at laptop and wallet.)

                                                                Yeah…

                                                        2. 4

                                                          Absolutely. I don’t have any data but would bet that RHEL is easily the biggest competitor to IBM’s mainframe/enterprise computing stuff.

                                                          1. 3

                                                            IBM does a huge Linux business that relies on RHEL. You could view RHEL as a low cost vendor to IBM.

                                                          2. 3

                                                            Without a doubt. In big enterprise businesses like finance where downtime is measured in the millions of dollars per minute, you shell out big money for IBM hardware and IBM software and IBM support contracts because you know with 100% certainty that what you get is going to work as well as they claim. If it doesn’t, it’s escalated until it does*.

                                                            All of that is well and good until one day someone notices that the company is shelling out way too much money annually on support for an aging Power server just to run some in-house Java web apps that would be just fine on a $50k Dell cluster running Linux. When you run an enterprise you always buy support and Red Hat is the industry standard in Linux support. So Red Hat really made a lot of headway into the lower to middle-end product space in enterprise datacenters, a space that IBM tried very hard to convince its customers didn’t actually exist. Until today.

                                                            * Of course, this is usually more true for IBM’s in-house stuff; there are a lot of cases where they’ve bought another company for its successful product, slapped the IBM logo on it, and then let it languish.

                                                            1. 3

                                                              20 years out of date. IBM’s biggest revenue division is services/cloud. The mainframe business is profitable but shrinking. Linux is raw material, not competition.

                                                          3. 1

                                                            What do you mean by, “this tops WhatsApp as biggest acquisition”? WhatsApp was acquired by Facebook. And neither are even in the ballpark of the biggest acquisitions on our public markets.

                                                            Is it that this is the biggest “tech company” acquisition, for some definition of “tech company”? That might be. It’s interesting because it’s probably only a company like IBM that could afford – and even make sense of – a $20B market cap enterprise open source company like RedHat. But more than anything, IMO, it shows that this particular segment of the public tech market is running on a lot of smoke and mirrors. I am a multi-decade techie, and I can’t even tell you what IBM and RedHat truly do, other than sell overpriced enterprise support contracts for legacy systems to enterprises. No techie I know would choose either of them as “growth” opportunities on the fundamentals of innovation or new products; instead, your excitement about their businesses comes down to what extent you think Fortune 1000 companies will feel obligated to pay them an IT support tax, of sorts. Meanwhile, for IBM, which has been operating on smoke and mirrors for a long time now (especially “Watson”), this is just a multi-billion dollar enterprise tech confusion where it can continue to hide losses and buy time for a massive speculative innovation that will never come.

                                                            1. 4

                                                              Is it that this is the biggest “tech company” acquisition

                                                              That’s what I meant. The news wave over Facebook dropping $16 billion on WhatsApp talked about it being the biggest or one of the biggest. This justifiably topped that. Remember, this is a company that sells a hybrid of FOSS and proprietary software, though. Most big acquisitions aren’t for FOSS-oriented companies. It’s both a nice precedent for valuing others and one that might not be repeated, given Red Hat itself might not be repeated. To put the deal size into perspective, HP acquired Compaq with its servers, CPUs, and OSes for $25 billion.

                                                              “But more than anything, IMO, it shows that this particular segment of the public tech market is running on a lot of smoke and mirrors.”

                                                              It often is. In IBM’s case, they’re a large company that doesn’t produce this sort of thing on their own. They have to acquire such innovations for ridiculously large amounts. It’s a problem with their culture mostly but some is inevitable. This one is different, though. As in my conversation with vyodaiken, these two have been pretty close for some time, with IBM betting the farm on Linux for most servers and service revenue. Although it was outsourced (an externality), it looks like they’re internalizing it since it’s so mission-critical. Ensuring tens of billions in Linux-derived revenue continues to flow for years… the total being a humongous number… might be worth $20 billion one-time. Then they also get new tie-in products like with other acquisitions. I think it’s mostly them addressing a core dependency, though.

                                                          1. 5

                                                            Very cool idea and mad respect for actually pulling it off. Looking forward to the next posts, especially on handling state explosion!

                                                            1. 3

                                                              So it’s an allocator that puts each allocation in its own page and frees pages as soon as the allocation is freed. The performance is apparently not as bad as this approach usually gets, because they don’t have to call mmap()/munmap()/mprotect() to change page tables. They can alter page tables without making syscalls because they are running as a virtual machine (using VT-x or SVM).

                                                              One question, is the cure possibly worse than the disease? If the page tables are mapped at all times so that they can be altered with low overhead, does that mean that a stray out-of-bounds write could result in something really fun happening such as an attacker-controlled page of memory getting marked as executable?

                                                              I’m sure I saw something on here ages ago about a system that used a hypervisor to grant superpowers to mostly-ordinary user processes, speeding up calls like mprotect(), mmap(), mincore() and in-process page fault handling (for processes that want to do their own virtual memory by handling SIGSEGV signals) by some huge factor. It might have been Dune? https://www.usenix.org/node/170864

                                                              1. 3

                                                                So it’s an allocator that puts each allocation in its own page and frees pages as soon as the allocation is freed.

                                                                Not really. They still place allocations right next to each other in physical memory, with multiple allocations potentially sharing a page. But for each allocation on a physical page, they create a dedicated virtual-to-physical mapping to that page. That way each allocation has its own virtual address range which can be unmapped by free() and never reused (because virtual memory is abundant, unlike physical), but RAM is still used as efficiently as with any other allocator (they actually wrap the system allocator).

                                                                Of course you can still access different allocations sharing a page through each of the virtual aliases, so this only helps with dangling pointers, not overruns or anything else really.
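
                                                                If it helps to see the aliasing idea concretely, here is a tiny userspace sketch (my own, not their implementation; they run under Dune and edit page tables directly): two virtual ranges backed by the same physical page, where unmapping one range leaves the other valid. Assumes Linux with memfd_create(); error checks omitted for brevity.

                                                                    #define _GNU_SOURCE
                                                                    #include <stdio.h>
                                                                    #include <string.h>
                                                                    #include <sys/mman.h>
                                                                    #include <unistd.h>

                                                                    int main(void) {
                                                                        long page = sysconf(_SC_PAGESIZE);
                                                                        int fd = memfd_create("backing", 0);       /* anonymous physical backing */
                                                                        ftruncate(fd, page);

                                                                        /* Two independent virtual ranges, both backed by the same physical page. */
                                                                        char *alias_a = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                                                                        char *alias_b = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

                                                                        strcpy(alias_a, "written via alias_a");
                                                                        printf("read via alias_b: %s\n", alias_b); /* same bytes, different address */

                                                                        /* "Freeing" the first allocation unmaps only its private virtual range; a
                                                                           dangling pointer into alias_a now faults, while alias_b keeps working. */
                                                                        munmap(alias_a, page);
                                                                        printf("alias_b still valid: %s\n", alias_b);

                                                                        munmap(alias_b, page);
                                                                        close(fd);
                                                                        return 0;
                                                                    }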

                                                                Also, their implementation is based on Dune.

                                                                Edit: They talk about the issue of mapping the page tables at about 1:40:43 in the recording of the presentation.

                                                                1. 1

                                                                  Aha! Thanks for clarifying.

                                                                1. 0

                                                                  +100

                                                                1. 8

                                                                  Focusing on theft again, after taking a break for several months. I’m well into a massive restructuring (to add multi-core support), which feels like it’s been going on forever. Now the main stuff is working, but I’m in the long tail of making sure error handling, config hooks, etc. work, now that some of them may be split between multiple address spaces.

                                                                  Other stuff that should make it in the next release:

                                                                  • Failure tagging: If the user code sets an identifier for a failure, it can avoid shrinking paths that would change into other known failures, so that common failures don’t shadow rarer ones. I have a proof of concept of this already, but it needs to be redone to work with multi-core, because that splits its state between processes.

                                                                  • Possibly replacing the RNG with something faster than Mersenne Twister 64, since outside of user code, theft spends essentially all of its time in the RNG.

                                                                  • A better interface for passing config to user hooks, and moving more common configuration things into official settings (for example: running until a time limit, rather than a number of test trials).

                                                                  • Better benchmarking of multi-core behavior (especially shrinking), and tuning with servers with 30+ cores in mind.

                                                                  • A function that provides a CLI interface, with argument parsing.

                                                                  • Improving the bloom filter implementation. (That will also become its own library.)

                                                                  • Lots of misc. improvements to shrinking.

                                                                  Other things that are on the horizon, but will probably be pushed until after the next release so it doesn’t take another year:

                                                                  • Coverage-directed test case generation. I have a proof of concept using clang’s SanitizerCoverage interface – I don’t want to make theft depend on a specific compiler, etc., but know what the interface would look like to be able to utilize that sort of info, whether it comes from clang or something more project-specific.

                                                                  • Better structural inference during shrinking (and coverage-directed generation). This should make shrinking faster.

                                                                  • Some sort of persistence API for generator input.

                                                                  Work:

                                                                  • Various performance improvements to our regex engine (with a focus on frontloading work at compile-time).

                                                                  • Implementing some succinct data structures, for efficiently querying a (currently) massive in-memory data set. I should be able to eventually open-source the parts that aren’t highly specific to our use case.

                                                                  1. 1

                                                                    Re RNG. I always used xorshift when speed over quality was the goal. Plus, it’s tiny. A quick look led me to someone saying xoshiro256 is faster with good statistical properties. So, there are a few to try.
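
                                                                    For reference, the core of xoshiro256** is tiny. This is a transcription of the published reference algorithm (Blackman/Vigna); the state just needs to be seeded with something nonzero, typically via splitmix64. Nothing here is specific to theft.

                                                                        #include <stdint.h>

                                                                        static uint64_t s[4];                  /* seed with nonzero values before use */

                                                                        static inline uint64_t rotl(uint64_t x, int k) {
                                                                            return (x << k) | (x >> (64 - k));
                                                                        }

                                                                        /* xoshiro256**: one 64-bit output per call; state advances in place. */
                                                                        uint64_t xoshiro256ss_next(void) {
                                                                            uint64_t result = rotl(s[1] * 5, 7) * 9;
                                                                            uint64_t t = s[1] << 17;

                                                                            s[2] ^= s[0];
                                                                            s[3] ^= s[1];
                                                                            s[1] ^= s[2];
                                                                            s[0] ^= s[3];
                                                                            s[2] ^= t;
                                                                            s[3] = rotl(s[3], 45);

                                                                            return result;
                                                                        }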

                                                                    1. 2

                                                                      Thanks!

                                                                    2. 1

                                                                      Coverage-directed test case generation. I have a proof of concept using clang’s SanitizerCoverage interface – I don’t want to make theft depend on a specific compiler, etc., but know what the interface would look like to be able to utilize that sort of info, whether it comes from clang or something more project-specific.

                                                                      Oh wow, exciting! I actually started looking into the theft sources with the intention of doing this, great to hear you’re working on it yourself. Did you have any successes yet finding bugs that did not show up without feedback?

                                                                      1. 2

                                                                        I haven’t hooked it up into theft yet, I just confirmed that the interface will work like I expect – and there isn’t much point in starting until the multi-core stuff is done. It’s almost a complete rewrite (which is why it’s taken so long).

                                                                        I want to add a more general coverage feedback hook, which could also be used with (say) line number info from Lua’s debug hook, hashes of runtime data, or something else that can be easily monitored and would correlate to meaningfully different behavior.

                                                                    1. 8

                                                                      I love this kind of stuff, because it seems young developers confuse the web with the internet. There is more than HTTP out there, folks! For God’s sake, make your own protocols! It’s fun!

                                                                      1. 6

                                                                        I agree. Do you have any recommendation about how to learn to implement your own protocols?

                                                                        1. 11
                                                                          • Assume the network drops or delays your packets indefinitely.
                                                                          • Use CBOR for binary protocols and JSON (one message per line) for plaintext protocols as a very safe starting point (see the framing sketch below).
                                                                          • Unauthorized peers being able to grow other peers’ internal state opens up a possibility of cheap DoS attacks.
                                                                          • Don’t roll your own crypto.
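
                                                                          As a starting point for the “one message per line” idea, here is a rough C sketch of the framing layer only (the JSON parsing would go through a library); the function and handler names are made up. The main point is tolerating partial reads.

                                                                              /* Newline-delimited message framing over a stream socket. */
                                                                              #include <string.h>
                                                                              #include <unistd.h>

                                                                              #define BUF_MAX 65536

                                                                              void read_messages(int fd, void (*handle)(const char *line)) {
                                                                                  char buf[BUF_MAX];
                                                                                  size_t used = 0;

                                                                                  for (;;) {
                                                                                      ssize_t n = read(fd, buf + used, sizeof(buf) - used - 1);
                                                                                      if (n <= 0)
                                                                                          break;                      /* EOF or error: caller decides what to do */
                                                                                      used += (size_t)n;
                                                                                      buf[used] = '\0';

                                                                                      char *start = buf, *nl;
                                                                                      while ((nl = strchr(start, '\n')) != NULL) {
                                                                                          *nl = '\0';
                                                                                          handle(start);              /* one complete message */
                                                                                          start = nl + 1;
                                                                                      }
                                                                                      used = (size_t)((buf + used) - start);
                                                                                      memmove(buf, start, used);      /* keep any partial tail for the next read */

                                                                                      if (used == sizeof(buf) - 1)
                                                                                          break;                      /* oversized message: treat as a protocol error */
                                                                                  }
                                                                              }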
                                                                          1. 7

                                                                            I’ll add that learning about and practicing with FSMs, plus FSMs of FSMs, is good preparation. Most protocols that I looked into were FSMs.

                                                                            1. 4

                                                                              Haha, just edited my post to say finite state machines are your friend. :-P

                                                                              1. 2

                                                                                Yeah. People sometimes make the mistake of assuming the network to be reliable and fail to factor in the drops, fixing them on a case-by-case basis, turning the code into a horrible spaghetti mess.

                                                                                FSMs turn that into “what if I receive an init packet while waiting for a reply?” which leads to much more solid designs.
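
                                                                                A toy sketch of what that looks like in C, with made-up states and events: every (state, event) pair gets an explicit answer, including the awkward ones like a duplicate init while waiting for a reply.

                                                                                    #include <stdio.h>

                                                                                    typedef enum { ST_IDLE, ST_AWAIT_REPLY, ST_CLOSED } state_t;
                                                                                    typedef enum { EV_INIT, EV_REPLY, EV_TIMEOUT } event_t;

                                                                                    /* One step of the protocol machine: (state, event) -> next state. */
                                                                                    state_t step(state_t st, event_t ev) {
                                                                                        switch (st) {
                                                                                        case ST_IDLE:
                                                                                            if (ev == EV_INIT) return ST_AWAIT_REPLY;
                                                                                            return ST_IDLE;                          /* stray reply/timeout: ignore */
                                                                                        case ST_AWAIT_REPLY:
                                                                                            if (ev == EV_REPLY)   return ST_IDLE;
                                                                                            if (ev == EV_TIMEOUT) return ST_CLOSED;
                                                                                            return ST_AWAIT_REPLY;                   /* duplicate init: drop it */
                                                                                        case ST_CLOSED:
                                                                                        default:
                                                                                            return ST_CLOSED;                        /* terminal: drop everything */
                                                                                        }
                                                                                    }

                                                                                    int main(void) {
                                                                                        state_t st = ST_IDLE;
                                                                                        st = step(st, EV_INIT);   /* -> AWAIT_REPLY */
                                                                                        st = step(st, EV_INIT);   /* duplicate init handled explicitly */
                                                                                        st = step(st, EV_REPLY);  /* -> IDLE */
                                                                                        printf("final state: %d\n", st);
                                                                                        return 0;
                                                                                    }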

                                                                            2. 3

                                                                              Any time you need IPC within or across machines is a chance to implement a protocol. Generally, it’s not a good idea if you don’t know what you are doing, so I would first try on a hobby project. If you are getting paid for the work, do it when you have the chops to do it and the need.

                                                                              This goes for everything: if you are skilled at making it, make it; otherwise, use the work of those who are. Clearly, there is a chicken-and-egg problem, since you need to acquire the skill somewhere, and that’s where hobby projects or practice projects are great.

                                                                              EDIT: Pro Tip — Finite state machines are your friend.

                                                                              1. 1

                                                                                Do you have experience implementing protocols that are not your own? If not, start with that. You will learn a lot more about protocol design and implementation that way than by reading a textbook or blog posts or whatever.

                                                                                1. 1

                                                                                  I agree. I do have experience, but I want to know more about how other people learn and what they recommend since I might have missed something.

                                                                            1. 11

                                                                              Global warming is important, but realistically we can’t address it until we have regained political stability (and significantly improved on the pre-Trump status quo). Goals for the next 10 years are:

                                                                              1. keep my family safe
                                                                              2. avoid civil war, fascism, etc.
                                                                              3. repair the cultural rift that has people at each others’ throats

                                                                              If I can make impacts on longer term issues during that time, great, but it’s hard to think about right now.

                                                                              1. 5

                                                                                So, essentially you’re saying that since Trump was elected we are collectively incapable of doing anything but running in circles shouting about imminent fascism? Any efforts to improve technology wrt. environmental impact cannot realistically be expected to succeed, because politics? Seems like a terrible, self-defeating attitude to me.

                                                                                1. 1

                                                                                  Global warming is not a technological problem insofar as you can’t just invent a widget to solve global warming. Even if your widget is something like “planetary scale air filter”, you will not be able to build or operate it without social/political backing. Also:

                                                                                  If I can make impacts on longer term issues during that time, great

                                                                                  1. 4

                                                                                    It’s not a black and white issue, and it’s not going to be ‘solved’ by one major breakthrough. Their point is just that there’s no reason why the current political situation in the USA needs to bring everything to a halt. If you don’t have the time or headspace to deal with it right now, that’s absolutely okay (what matters is you’re aware of it)! Everyone’s circumstances are different, but collectively, we can’t afford to just put it on hold, and it doesn’t have to be at the expense of other important issues. If anything, I’d hope that it might have the power to bring people closer together (if a threat to humanity can’t do that, what can?).

                                                                                2. -1

                                                                                  Yes, you’re right that we can’t solve this problem with technical solutions. Other commenters notwithstanding..

                                                                                  1. 2

                                                                                    What makes you think that? Climate change is in many ways a technical problem, how do you think we are going to solve it if not by adapting our technology?

                                                                                    1. 3

                                                                                      Did mere technology or lobbying/sales decide what kinds of power plants will be all over many countries? Did technology itself create the disposable culture that adds to waste, or did user demand? Is there a technological solution in sight for the methane emissions from cattle whose beef is in high demand? On the other side, would we be storing endless amounts of data in these data centers appearing everywhere if technology didn’t make storage and computing so cheap? And is there a technological solution to avoiding them throwing that stuff away on a regular basis when customers want new stuff or managers want metrics to change? Is there a technological solution to getting people who neither care nor are legally required to care to stop doing damaging behaviors?

                                                                                      Sounds more like people-oriented decisions are causing most of the problem. Even if you create a beneficial technology, those people might create new practices or legislation that reduce or counter its benefits. Actually, that’s the default thing they do, and they’re doing it right now on a massive scale. I think we just got lucky with low-power chips/appliances, since longer-lasting batteries and cheaper utility bills are immediate benefits for most people that just happen to benefit the environment on the side.

                                                                                      1. 2

                                                                                        It is obviously not merely technology that got us here. But these problems are all about technology on a fundamental level and if we want things to change, we need the tech that makes these changes viable. No point lobbying for an alternative that does not exist.

                                                                                        Sounds more like people-oriented decisions are causing most of the problem.

                                                                                        Always an interplay of technology- and people-oriented decisions. But changing technology is much easier compared to changing people, which has resulted in utter dystopia many times.

                                                                                        Even if you create a beneficial technology, those people might create new practices or legislation that reduce or counter its benefits.

                                                                                        Same with well-intentioned legislation. But companies have no intrinsic incentive not to use beneficial technology, only to inflate its impact for marketing purposes (like the faked car emissions). They do have an incentive to game legislation, otherwise there would be no point to that legislation (in general; individual cases might profit from being good examples).

                                                                                1. 6

                                                                                  This is pretty far off-topic, and most likely to result in a bunch of yelling back and forth between True Believers.

                                                                                  Flagged.

                                                                                  EDIT:

                                                                                  OP didn’t even bother to link to the claimed “increasing evidence”. This is a bait thread. Please don’t.

                                                                                  1. 17

                                                                                    Shrug. I find the complete lack of political awareness at most of the tech companies I’ve worked at to be rather frustrating and I welcome an occasional thread on these topics in this venue.

                                                                                    1. 13

                                                                                      It’s possible that many of your coworkers are more politically aware than they let on, and deliberately avoid talking about it in the workplace in order to avoid conflict with people who they need to work with in order to continue earning money.

                                                                                      1. 1

                                                                                        All work is political. “Jesus take the wheel” for your impact on the world through your employment decisions is nihilistic.

                                                                                        1. 8

                                                                                          Not trumpeting all your political views in the workplace does not mean completely ignoring political incentives for employment or other decisions. I’m not sure what made you think GP is advocating that.

                                                                                    2. 3

                                                                                      Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened. Perhaps your mental model needs updating… maybe your colleagues are better equipped to discuss broad topics politely than you previously imagined?

                                                                                      1. 4

                                                                                        Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened.

                                                                                        Probably because everyone on this site is good and right-thinking — or knows well enough to keep his head down and his mouth shut.

                                                                                        (Which has nothing to do with the truth of either side’s beliefs; regardless of truth, why cause trouble for no gain?)

                                                                                        1. 5

                                                                                          To me, the people on this site definitely handle these discussions better. Hard to say how much better, given that’s subjective. Let’s try for objective criteria: there are fewer flame wars, more people sticking to the facts as they see them vs comments that are pure noise, and moderation techniques that usually reduce the worst stuff without censoring or erasing civil dissenters. If those metrics are sound, then the Lobsters community is objectively better at political discussions than many sites.

                                                                                        1. 5

                                                                                          These all seem to say one thing: climate change is going to be worse faster than some other prediction said. But that does not even remotely address your claim that “organized human life might not be possible by the end of the century and possibly sooner”. What on earth makes you think you know anything about what conditions humans need to organize?

                                                                                          1. 1

                                                                                            This is a good point. I guess my “evidence” would be past civilization collapse as a result of environmental destruction like what happened on Easter Island.

                                                                                      1. 2

                                                                                        I would love to see research on using an AFL-style genetic algorithm based on (branch) coverage feedback for generating test cases in a QuickCheck-style property testing framework. You could do that with clang’s -fsanitize-coverage options, similar to what libFuzzer does but with type-aware input generation, shrinking, etc.
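
                                                                                        A minimal sketch of the collection side, using the two callbacks documented for clang’s -fsanitize-coverage=trace-pc-guard. How a property-testing framework would feed the counter back into its generator is left open (and hypothetical here); you compile the code under test with the flag and link this runtime in uninstrumented.

                                                                                            #include <stdint.h>

                                                                                            static uint64_t edges_hit;               /* distinct edges seen so far */

                                                                                            /* Called once per instrumented module at startup; give each edge a nonzero id. */
                                                                                            void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
                                                                                                static uint32_t next_id = 1;
                                                                                                for (uint32_t *g = start; g < stop; g++)
                                                                                                    if (*g == 0)
                                                                                                        *g = next_id++;
                                                                                            }

                                                                                            /* Called on every instrumented edge; count each edge at most once. */
                                                                                            void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
                                                                                                if (*guard == 0)
                                                                                                    return;
                                                                                                edges_hit++;
                                                                                                *guard = 0;
                                                                                            }

                                                                                            /* A test runner could snapshot this before/after each generated input and
                                                                                               keep (or mutate further) inputs that discover new edges, AFL-style. */
                                                                                            uint64_t coverage_edges(void) {
                                                                                                return edges_hit;
                                                                                            }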

                                                                                        1. 1

                                                                                          This is something I’ve been wanting to do as a side project for a very long time: instrument the output of the Elm compiler with code-coverage stats in the runtime, and use those stats from within the test runner for some kind of coverage-maximizing, AFL-style fuzzer.

                                                                                        1. 30

                                                                                          Yesterday afternoon, the community replied: https://www.arm-basics.com/

                                                                                          1. 3

                                                                                            That was a nice reply :D

                                                                                            1. 4

                                                                                              Eh, it’s a pretty cheap shot, morally on the same level as the original page and much weaker in content. I hope it’s not representative of the larger RISC-V project.

                                                                                              1. 13

                                                                                                I’m not too impressed with the counter-FUD but I think it’s hilarious that the riscv-basics.com people didn’t think to register arm-basics.com while they were at it.

                                                                                            2. 1

                                                                                              Absolutely brilliant

                                                                                            1. 6

                                                                                              Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?

                                                                                              If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.

                                                                                              1. Use OpenSSH with an internal CA
                                                                                              2. Automate collection of server public ssh fingerprints and deployment of known_hosts files to all systems and clients (we do it via LDAP + other glue)
                                                                                              3. Utilize a third party tool that can do this for you (e.g., krypt.co)

                                                                                              Your users should never see the message “the authenticity of (host) cannot be established”

                                                                                              1. 4

                                                                                                Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?

                                                                                                1. 3

                                                                                                  It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.

                                                                                                  I.e., you have to copy keys from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).

                                                                                                  1. 1

                                                                                                    Key management is already your problem.

                                                                                                    ssh’s default simply lets you pretend that it isn’t.

                                                                                                    1. 2

                                                                                                      Very true. I didn’t mean to imply otherwise.

                                                                                              1. 1

                                                                                                We won’t do much editing for grammar or meaning;

                                                                                                […]

                                                                                                We probably don’t need to talk about “f*cking moron”. The caps in “AND STANDARD” is another way to indicate frustration, like the “honestly” above. […] None of these carry any meaning about the technical problem; they’re just expressions of anger.

                                                                                                […]

                                                                                                This is a much better email. It has 43% as many words, but loses none of the meaning.

                                                                                                Do you see the problem? You are absolutely changing the meaning of the text… except for the technical bits. While the original email expressed anger and frustration at valuing standards over reality, your take makes the technical points alright, but stops there: it conveys none of the feelings the original author had and expressed in his rant.

                                                                                                You allude to this omission when you say “None of these carry any meaning about the technical problem” and justify it by saying “they’re just expressions of anger.”. The assumption here is that anger and frustration are not valid feelings to express in this context, probably because you regard interactions on the LKML as part of a professional, corporate setting, and in the beginning you say:

                                                                                                If you insult people in professional interactions, you’ll find yourself increasingly alienated and excluded simply because people don’t like being insulted!

                                                                                                But that makes me wonder… we’re talking about Linus Torvalds here, who has been doing that exact thing for decades, on public mailing lists, for all the world to see, including a quarrel with a world-renowned professor when he was still a student himself. And while that does earn him some occasional backlash, I think he is hardly alienated or excluded by his collaborators; to the contrary, he fostered a community that made his little project… quite the success, one might say.

                                                                                                How come? I agree that insulting people in a corporate environment will usually not end well, so the answer must be that LKML was not always a corporate place, and still is not to the degree that Linus and other prominent maintainers are sticking to their ways, in spite of pressure to assimilate into the culture of the corporations that have embraced Linux development. And this, it seems to me, is at the heart of the matter: from occasionally rough but also playful “hacker culture”, where strong feelings are held and things can get emotional, has emerged something that, for various reasons, big tech firms embrace and engage in. But their cultures eschew having soul in the game and impoliteness is not tolerated, so when actors from these two cultures collaborate, sometimes attitudes will clash and sparks will fly.

                                                                                                Now, I don’t want to convince you that hacker culture is “right” in some way and socially tolerating some anger and insults is a good thing, except for noting, again, that it has some very successful projects to stand for it, otherwise we would not be talking about it. But what really bothers me is the cultural imperialism that I see in posts hating on Linus like yours does. As far as I can tell, you just came across a post with his mail and thought you’d bash on him some for his Bad Character. OK, that is a little unfair, because you made an effort to be constructive in your moralizing, but my point is, you were not involved in this incident, nor, as far as I can tell, any similar ones. Your only reason to engage is to promote your own culture, because it is Right and being angry is Wrong. You’re not content with letting the kernel community work things out on their own, because you know what is Right and Linus is Wrong and has to change. This galls me. At its heart this is the same attitude that led to indigenous cultures being destroyed around the globe. “We’ll show you how it’s done, and we need your land/kernel”.

                                                                                                Human culture and society is a deeply complex topic, and anything we think we know is probably wrong to some degree. Instead of going on crusades, however politely executed, I think we should be striving for tolerance and collaboration and work the inevitable problems out on the ground as they come. Just don’t shoot arrows across the river, please?

                                                                                                1. 7

                                                                                                  I always laugh when people come up with convoluted defenses for C and the effort that goes into that (even writing papers). Their attachment to this language has caused billions if not trillions worth of damages to society.

                                                                                                  All of the defenses that I’ve seen, including this one, boil down to nonsense. Like others, the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift, and, for the things C is not needed for, yes, even JavaScript is better than C (if you’re not doing systems-programming).

                                                                                                  1. 31

                                                                                                    Their attachment to this language has caused billions if not trillions worth of damages to society.

                                                                                                    Their attachment to a language with known but manageable defects has created trillions if not more in value for society. Don’t be absurd.

                                                                                                    1. 4

                                                                                                      [citation needed] on the defects of memory unsafety being manageable. To a first approximation every large C/C++ codebase overfloweth with exploitable vulnerabilities, even after decades of attempting to resolve them (Windows, Linux, Firefox, Chrome, Edge, to take a few examples.)

                                                                                                      1. 2

                                                                                                        Compared to the widely used large codebase in which language for which application that accepts and parses external data and yet has no exploitable vulnerabilities? BTW: http://cr.yp.to/qmail/guarantee.html

                                                                                                        1. 6

                                                                                                          Your counterexample is a smaller, low-featured mail server written by a math and coding genius. I could cite Dean Karnazes doing ultramarathons on how far people can run. That doesn’t change that almost all runners would drop before 50 miles, esp before 300. Likewise with C code: citing the best of the secure coders doesn’t change what most will do or have done. I took the author’s statement “to first approximation every” to mean “almost all” but not “every one.” It’s still true.

                                                                                                          Whereas, Ada and Rust code have done a lot better on memory-safety even when non-experts are using them. Might be something to that.

                                                                                                          1. 2

                                                                                                            I’m still asking for the non C widely used large scale system with significant parsing that has no errors.

                                                                                                            1. 3

                                                                                                              That’s cheating, saying “non-C” and “widely used.” Most of the no-error parsing systems I’ve seen use a formal grammar with autogeneration. They usually extract to OCaml. Some also generate C just to plug into the ecosystem, since it’s a C/C++-based ecosystem. It’s incidental in those cases: it could be any language, since the real programming is in the grammar and generator. An example of that is the parser in the Mongrel server, which was doing a solid job when I was following it. I’m not sure if they found vulnerabilities in it later.

                                                                                                          2. 5

                                                                                                            At the bottom of the page you linked:

                                                                                                            I’ve mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs.

                                                                                                            Not great support for your claim.

                                                                                                            1. 2

                                                                                                              There was an integer overflow reported in qmail in 2005. Bernstein does not consider this a vulnerability.

                                                                                                          3. 3

                                                                                                            That’s not what I meant by attachment. Their interest in C certainly created much value.

                                                                                                          4. 9

                                                                                                            Their attachment to this language has caused billions if not trillions worth of damages to society.

                                                                                                            Inflammatory much? I’m highly skeptical that the damages have reached trillions, especially when you consider what wouldn’t have been built without C.

                                                                                                            1. 12

                                                                                                              Tony Hoare, null’s creator, regrets its invention and says that just inserting the one idea has cost billions. He mentions it in talks. It’s interesting to think that language creators even think of the mistakes they’ve made have caused billions in damages.

                                                                                                              “I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

                                                                                                              If the billion dollar mistake was the null pointer, the C gets function is a multi-billion dollar mistake that created the opportunity for malware and viruses to thrive.
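
                                                                                                              For anyone who has not bumped into it: gets() takes no buffer size at all, which is why it was eventually removed in C11, while fgets() is the bounded replacement. A minimal contrast:

                                                                                                                  #include <stdio.h>

                                                                                                                  int main(void) {
                                                                                                                      char buf[16];

                                                                                                                      /* gets(buf);   // no bound: any line longer than 15 chars overflows buf
                                                                                                                                      // (gets() was removed from the language in C11) */

                                                                                                                      if (fgets(buf, sizeof buf, stdin) != NULL)  /* bounded: reads at most 15 chars */
                                                                                                                          printf("read: %s", buf);
                                                                                                                      return 0;
                                                                                                                  }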

                                                                                                              1. 2

                                                                                                                He’s deluded. You want a billion dollar mistake: try CSP/Occam plus Hoare Logic. Null is a necessary byproduct of implementing total functions that approximate partial ones. See, for example, McCarthy in 1958 defining a LISP search function with a null return on failure. http://www.softwarepreservation.org/projects/LISP/MIT/AIM-001.pdf

                                                                                                                1. 3

                                                                                                                  “ try CSP/Occam plus Hoare Logic”

                                                                                                                  I think you meant formal verification, which is arguable. They could’ve wasted a hundred million easily on the useless stuff. Two out of three are bad examples, though.

                                                                                                                  Spin has had a ton of industrial success, easily knocking out problems in protocols and hardware that are hard to find via other methods. With hardware, the defects could’ve caused recalls like the Pentium bug. Likewise, Hoare-style logic has been doing its job in Design-by-Contract, which knocks time off the debugging and maintenance phases, the most expensive ones. If anything, not using tech like this can add up to a billion-dollar mistake over time.

                                                                                                                  Occam looks like it was a large waste of money, esp in the Transputer.

                                                                                                                  1. 1

                                                                                                                    No. I meant what I wrote. I like spin.

                                                                                                                2. 1

                                                                                                                  Note what he does not claim is that the net result of C’s continued existence is negative. Something can have massive defects and still be an improvement over the alternatives.

                                                                                                                3. 7

                                                                                                                  “especially when you consider what wouldn’t have been built without C.”

                                                                                                                  I just countered that. The language didn’t have to be built the way it was or persist that way. We could be building new stuff in a C-compatible language with many benefits of HLLs like Smalltalk, LISP, Ada, or Rust, with the legacy C getting gradually rewritten over time. If that had started in the 90s, we could have the equivalent of a LISP machine for C code, OS, and browser by now.

                                                                                                                  1. 1

                                                                                                                    It didn’t have to, but it was, and it was then used to create tremendous value. Although I concur with the numerous shortcomings of C, and it’s past time to move on, I also prefer the concrete over the hypothetical.

                                                                                                                    The world is a messy place, and what actually happens is more interesting (and more realistic, obviously) than what people think could have happened. There are plenty of examples of this inside and outside of engineering.

                                                                                                                    1. 3

                                                                                                                      The major problem I see with this “concrete” winners-take-all mindset is that it encourages whig history which can’t distinguish the merely victorious from the inevitable. In order to learn from the past, we need to understand what alternatives were present before we can hope to discern what may have caused some to succeed and others to fail.

                                                                                                                      1. 2

                                                                                                                        Imagine if someone created Car2 which crashed 10% of the time that Car did, but Car just happened to win. Sure, Car created tremendous value. Do you really think people you’re arguing with think that most systems software, which is written in C, is not extremely valuable?

                                                                                                                        It would be valuable even if C was twice as bad. Because no one is arguing about absolute value, that’s a silly thing to impute. This is about opportunity cost.

                                                                                                                        Now we can debate whether this opportunity cost is an issue. Whether C is really comparatively bad. But that’s a different discussion, one where it doesn’t matter that C created value absolutely.

                                                                                                                  2. 8

                                                                                                                    C is still much more widely used than those safer alternatives; I don’t see how laughing off a fact is better than researching its causes.

                                                                                                                    1. 10

                                                                                                                      Billions of lines of COBOL run mission-critical services of the top 500 companies in America. Better to research the causes of this than laughing it off. Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                                                                                                      1. 7

                                                                                                                        Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                                                                                                        Researching the causes of something doesn’t imply taking a stance on it, if anything, taking a stance on something should hopefully imply you’ve researched it. Even with your comment I still don’t see how laughing off a fact is better than researching its causes.

                                                                                                                        You might be interested in laughing about all the cobol still in use, or in research that looks into the causes of that. I’m in the latter camp.

                                                                                                                        1. 5

                                                                                                                          I think you might be confused at what I’m laughing at. If someone wrote up a paper about how we should continue to use COBOL for reasons X, Y, Z, I would laugh at that too.

                                                                                                                          1. 3

                                                                                                                            COBOL has some interesting features(!) that make it very “safe”. Referring to the 85 standard (a rough C analogy follows the list):

                                                                                                                            X. No runtime stack, so no stack-overflow vulnerabilities.
                                                                                                                            Y. No dynamic memory allocation, so it’s impossible to exhaust the heap.
                                                                                                                            Z. All memory statically allocated (see Y); no buffer overflows.
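
                                                                                                                            A rough sketch of that storage model, translated into C for contrast; this is only an analogy, and the field names are invented for illustration:

                                                                                                                                /* A rough C analogue of the storage model above: every record is a
                                                                                                                                   fixed-size object declared up front, so there is nothing like
                                                                                                                                   malloc() to exhaust and no deep recursion to overflow a stack.
                                                                                                                                   The field names are invented for illustration. */
                                                                                                                                #include <stdio.h>
                                                                                                                                #include <string.h>

                                                                                                                                static char customer_name[21];   /* roughly PIC X(20) */
                                                                                                                                static char account_id[11];      /* roughly PIC X(10) */

                                                                                                                                int main(void) {
                                                                                                                                    /* COBOL's MOVE silently truncates to the declared size; a bounded
                                                                                                                                       copy is the closest C equivalent. Static storage starts zeroed,
                                                                                                                                       so the buffers stay NUL-terminated. */
                                                                                                                                    strncpy(customer_name, "A NAME LONGER THAN TWENTY CHARS", sizeof customer_name - 1);
                                                                                                                                    strncpy(account_id, "ACCT-000042", sizeof account_id - 1);
                                                                                                                                    printf("%s / %s\n", account_id, customer_name);
                                                                                                                                    return 0;
                                                                                                                                }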
                                                                                                                            
                                                                                                                            1. 3

                                                                                                                              We should use COBOL with contracts for transactions on the blockchains. The reasons are:

                                                                                                                              X. It’s already got compilers big businesses are willing to bet their future on.

                                                                                                                              Y. It supports decimal math instead of floating point: no converting real-world amounts into fake, computer-math approximations (a quick illustration follows this list).

                                                                                                                              Z. It’s been used in transaction-processing systems that have run for decades with no major downtime or financial losses disclosed to investors.

                                                                                                                              λ. It can be mathematically verified by some people who understand the letter on the left.
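
                                                                                                                              On point Y, a minimal C sketch of what that avoids, with scaled integers standing in for COBOL’s fixed-point decimal types:

                                                                                                                                  /* Quick illustration of point Y: binary floating point cannot represent
                                                                                                                                     most decimal fractions exactly, while scaled-integer (or true decimal)
                                                                                                                                     arithmetic keeps money amounts exact. */
                                                                                                                                  #include <stdio.h>

                                                                                                                                  int main(void) {
                                                                                                                                      double d = 0.10 + 0.20;            /* binary floating point */
                                                                                                                                      printf("%.17f\n", d);              /* prints 0.30000000000000004 */

                                                                                                                                      long cents = 10 + 20;              /* the same sum, held in whole cents */
                                                                                                                                      printf("%ld.%02ld\n", cents / 100, cents % 100);   /* prints 0.30 */
                                                                                                                                      return 0;
                                                                                                                                  }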

                                                                                                                              You can laugh. You’d still be missing out on a potentially $25+ million opportunity for IBM. Your call.

                                                                                                                              1. 1

                                                                                                                                Your call.

                                                                                                                                I believe you just made it your call, Nick. $25+ million opportunity, according to you. What are you waiting for?

                                                                                                                                1. 4

                                                                                                                                  You’re right! I’ll pitch IBM’s senior executives on it the first chance I get. I’ll even put on a $600 suit so they know I have more business acumen than most coin pitchers. I’ll use phrases like vertical integration of the coin stack. Haha.

                                                                                                                            2. 4

                                                                                                                              That makes sense. I did do the C research. I’ll be posting about that in a reply later tonight.

                                                                                                                              1. 10

                                                                                                                                I’ll be posting about that in a reply later tonight.

                                                                                                                                Good god man, get a blog already.

                                                                                                                                Like, seriously, do we need to pass a hat around or something? :P

                                                                                                                                1. 5

                                                                                                                                  Haha. Someone actually built me a prototype a while back. Makes me feel guilty that I don’t have one, instead of the usual lazy or overloaded.

                                                                                                                                    1. 2

                                                                                                                                        That’s cool. Setting one up isn’t the hard part. The hard part is doing a presentable design, organizing the complex activities I do, moving my write-ups into it, adding metadata, and so on. I’m still not sure how much I should worry about the design. One’s site can be considered a marketing tool for people who might offer jobs and such. I’d go into more detail, but you’d tell me “that might be a better fit for Barnacles.” :P

                                                                                                                                      1. 3

                                                                                                                                          Skip the presentable design. Dan Luu’s blog does pretty well, and it’s not working hard to be easy on the eyes. The rest of that stuff you can add as you go - remember, perfect is the enemy of good.

                                                                                                                                        1. 0

                                                                                                                                          This.

                                                                                                                                          Hell, Charles Bloom’s blog is basically an append-only textfile.

                                                                                                                                        2. 1

                                                                                                                                          ugh okay next Christmas I’ll add all the metadata, how does that sound

                                                                                                                                          1. 1

                                                                                                                                              Making me feel guilty again. Nah, I’ll build it myself, likely on a VPS.

                                                                                                                                              And damn, time has been flying. It doesn’t feel like several months have passed on my end.

                                                                                                                                  1. 1

                                                                                                                                      Looking forward to reading it :)

                                                                                                                            3. 4

                                                                                                                              Well, we have those already, and they’re called Rust, Swift, ….

                                                                                                                              And maybe D too. D’s “better-c” mode is pretty interesting, to my mind.

                                                                                                                              1. 3

                                                                                                                                Last I checked, D’s “better-c” was a prototype.

                                                                                                                              2. 5

                                                                                                                                If you had actually made a serious effort at understanding the article, you might have come away with an understanding of what Rust, Swift, etc. are lacking to be a better C. By laughing at it, you learned nothing.

                                                                                                                                1. 2

                                                                                                                                  the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift

                                                                                                                                  Those (and Ada, and others) don’t translate to assembly well. And they’re harder to implement than, say, C90.

                                                                                                                                  1. 3

                                                                                                                                    Is there a reason why you believe that other languages don’t translate to assembly well?

                                                                                                                                    It’s true those other languages are harder to implement, but it seems to be a moot point to me when compilers for them already exist.

                                                                                                                                    1. 1

                                                                                                                                      Some users of C need an assembly-level understanding of what their code does. With most other languages that isn’t really achievable. It’s becoming less and less achievable even with modern C compilers, and those users aren’t very happy about it (see various rants by Torvalds about brain-damaged compilers, etc.).

                                                                                                                                      1. 4

                                                                                                                                        “Some users of C need an assembly-level understanding of what their code does.”

                                                                                                                                        Which C doesn’t give them, due to compiler differences and the effects of optimization. Aside from spotting errors, that’s why folks in safety-critical fields are required to check the assembly against the code. The C language is certainly closer to assembly behavior, but it doesn’t by itself give assembly-level understanding.
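
                                                                                                                                        A small example of the gap being described, assuming a typical modern optimizer; the function is invented for illustration, and comparing its -O0 and -O2 output makes the point quickly:

                                                                                                                                            /* With gcc or clang at -O2, this loop is typically replaced by a
                                                                                                                                               closed-form computation (roughly n * (n - 1) / 2), so the emitted
                                                                                                                                               assembly bears little resemblance to the loop that was written. */
                                                                                                                                            unsigned sum_below(unsigned n) {
                                                                                                                                                unsigned total = 0;
                                                                                                                                                for (unsigned i = 0; i < n; i++)
                                                                                                                                                    total += i;
                                                                                                                                                return total;
                                                                                                                                            }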

                                                                                                                                  2. 2

                                                                                                                                    So true. Every time I use the internet, the solid engineering of the Java/Jscript components just blows me away.

                                                                                                                                    1. 1

                                                                                                                                      Everyone prefers the smell of their own … software stack. I can only judge by what I can use now based on the merits I can measure. I don’t write new services in C, but the best operating systems are still written in it.

                                                                                                                                      1. 5

                                                                                                                                        “but the best operating systems are still written in it.”

                                                                                                                                        That’s an incidental part of history, though. People who are writing, say, a new x86 OS with a language balancing safety, maintenance, performance, and so on might not choose C. At least three chose Rust, one Ada, one SPARK, several Java, several C#, one LISP, one Haskell, one Go, and many C++. Plenty of choices are being explored, including languages C coders might say aren’t good for OSes.

                                                                                                                                        Additionally, many of those choosing C or C++ say it’s for the existing tooling, tutorials, talent, or libraries. Those are also incidental to its history rather than advantages of the language design. They’re definitely worthwhile reasons to choose a language for a project, but they shift the argument away from the language itself, implying the authors had better things in mind that weren’t yet usable for that project.

                                                                                                                                        1. 4

                                                                                                                                          I think you misinterpreted what I meant. I don’t think the best operating systems are written in C because of C. I’m just stating that the best current operating system I can run a website from is written in C; I’ll switch as soon as it’s practical and beneficial to switch.

                                                                                                                                          1. 2

                                                                                                                                            Oh OK. My bad. That’s a reasonable position.

                                                                                                                                            1. 3

                                                                                                                                              I worded it poorly; I won’t edit it, though, so the context stays.