Threads for PhantomZorba

    1. 9

      Several C drivers already have this exact problem because actually it’s supremely difficult to get the requirements right here. If those C drivers actually asserted that precondition, there’d be a lot more crashes!

      1. 8

        Changing that API precondition is a separate discussion from adding a new driver.

        A small improvement to drm_sched to loosen preconditions such that bugs are fixed across many existing drivers seems perfectly reasonable to me. No one is getting hung up on any kind of “C way” or “Rust way”, they’re getting hung up on objectively poor design.

        The determination of which way is better poses no technical barrier to making the Rust API safe and convenient for drivers in Rust.

        If existing C drivers can’t use the interface safely, why is it reasonable to expect the Rust interface to do so? The reason this whole problem came up for the AGX driver is because it has different requirements than existing drivers due to the design of the hardware. It’s a problem for all drivers, but it’s 100x more visible with the AGX driver.

        In any case, this is all moot now, as Lina will simply make a new scheduler that isn’t a total mess.

        1. 6

          Who is this “Marcan” person, why are you using that name instead of “Lina”?

          1. 11

            It’s a long-running harassment campaign and it’s unwelcome here.

            1. 5

              marcan is Hector Martin, https://social.treehouse.systems/@marcan, who seems to be the lead on the whole port of Linux. (I’m not deeply familiar with this part of the ecosystem so I may not be 100% right on this.)

        2. 14

          The fundamental absurdity of how the OOM killer works is strong evidence that overcommit and maybe even disk-backed virtual memory are, at a minimum, fundamentally broken as user-visible features. Pretending like resources are there that aren’t there isn’t a good way to build robust systems.

          I feel like system designers in the 80s jumped head first into virtual memory while hand-waving away, or before fully thinking through, all the robustness consequences it would have. Just because we can build systems that can simulate having more memory than they do doesn’t mean we should. This type of “we built cool tech, now let’s use it without considering the consequences” is so pervasive in the non-mission-critical computing world.

          1. 8

            Pretending like resources are there that aren’t there isn’t a good way to build robust systems

            I strongly disagree. I wrote a lot more last time, but the TL;DR: Robustness is built by handling failure gracefully, and handling failure at a coarse granularity is far easier than handling it at a fine granularity. An approach that requires handling it at a fine granularity makes the probability of needing to handle it higher, which increases the chance of getting it wrong.

            1. 6

              Reading your linked comment, it seems we have a pollution problem here:

              1. Indeed in practice, most programs don’t handle allocation failures gracefully. They’ll just dereference a NULL pointer and crash on the subsequent SEGV with no explanation or even a stack trace (see the sketch after this list).
              2. It thus makes sense to try and make allocation failures as rare as possible.
              3. Why bother handling allocation failures if they’re so rare?
              4. Now even programs that could have handled allocation failures can’t, because the kernel lied to them.
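
              As a minimal sketch of the failure mode in point 1 (hypothetical code, not taken from any particular project): the allocation result is never checked, so the eventual SIGSEGV points at the use site rather than at the malloc that actually failed.

                  #include <stdlib.h>
                  #include <string.h>

                  int main(void) {
                      /* Typical unchecked allocation: if malloc returns NULL here... */
                      char *buf = malloc(64 * 1024 * 1024);

                      /* ...this memset dereferences a null pointer and the process dies
                         with SIGSEGV, with no message pointing back at the failed malloc. */
                      memset(buf, 0, 64 * 1024 * 1024);

                      free(buf);
                      return 0;
                  }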

              One could argue that desktop & server OSes are not meant to handle critical systems that need to be absolutely certain that the memory they’ve been given is real (the same could be said of real time). Which wouldn’t be too bad if we had any choice. But since outside of extremely constrained environments we only have 3 OS kernels on the planet (we can probably ignore anything that isn’t NT, Linux, or BSD), we end up trying to use them for more than what they’re actually good for.

              These days we can’t even ship a video game and guarantee that, on a particular hardware configuration, it will run at a given FPS with no stuttering or crashes. It was possible on older computers and consoles, but somehow, as our machines became orders of magnitude more powerful, we lost that ability.

              I’m not sure how exactly we should go about it. There’s probably no practical path beyond solving the 30 million lines problem (which would most probably require splitting hardware and software activities into separate companies), but at the very least, it would be nice if we could have reliable blame: if something goes wrong, I want to know who is responsible. And the easiest way I can think of is to shift the blame to programs as much as possible.

              It would likely require technical error messages like “Sorry, Discord required more memory than the system could spare (currently 300MB). (Note: you currently have 63 programs running, total available RAM is 8GiB)”. I don’t know how someone used to “Oops, something unexpected happened” would react though.

              1. 5

                But since outside of extremely constrained environments we only have 3 OS kernels on the planet (we can probably ignore anything that isn’t NT, Linux, or BSD), we end up trying to use them for more than what they’re actually good for.

                Keep in mind that Windows doesn’t do overcommit. We don’t have to go and look at a hypothetical alternate universe of non-UNIX things to see what systems without overcommit look like, there’s a real non-UNIX world without overcommit right there. And it’s absolutely awful to work with.

                It would likely require technical error messages like “Sorry, Discord required more memory than the system could spare (currently 300MB). (Note: you currently have 63 programs running, total available RAM is 8GiB)”. I don’t know how someone used to “Oops, something unexpected happened” would react though.

                Most early ’90s operating systems would give errors like that. RiscOS had a nice GUI for dynamically configuring the memory available to different things. Oh, the fun we had making sure that the lines were just long enough that programs could launch.

                NT just reports failure in VirtualAlloc and that’s typically propagated as an SEH exception. And then a program crashes with an out-of-memory error. And then you look at Task Manager and see that you have 60 GiB of RAM free and go ‘huh?’.
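
                A minimal sketch of what that looks like from the program’s side, assuming a Win32 build environment (hypothetical snippet, not from any real codebase): with no overcommit, the commit charge is checked up front, so VirtualAlloc can fail even while Task Manager shows plenty of free RAM.

                    #include <windows.h>
                    #include <stdio.h>

                    int main(void) {
                        /* Reserve and commit 1 GiB up front. On NT the full commit charge
                           is accounted against the system commit limit immediately. */
                        void *p = VirtualAlloc(NULL, 1ull << 30,
                                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
                        if (p == NULL) {
                            /* Failure is reported here, at allocation time, even though
                               physical RAM may still look mostly free. */
                            fprintf(stderr, "VirtualAlloc failed: error %lu\n", GetLastError());
                            return 1;
                        }
                        VirtualFree(p, 0, MEM_RELEASE);
                        return 0;
                    }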

                1. 4

                  there’s a real non-UNIX world without overcommit right there. And it’s absolutely awful to work with.

                  Something I don’t understand is the huge disconnect between UNIX users and quite a number of Windows developers (especially game devs): each camp says the other OS is significantly worse to work with, and both seem to be ignorant of quite a few things the other has to offer. It’s really weird. (Disclaimer: I’ve never developed for Windows directly, the closest I ever got was using Qt.)

                  One important thing where Windows seems to have the upper hand is ABI stability: even though the Linux kernel seems to be the king here (“We never break users!!!”), Linux distributions seem to have a hard time running most GUI applications that were compiled more than a few years ago. And I wouldn’t know how to compile a binary that would work on most distros, especially a 3D multiplayer game. I wouldn’t be surprised if it required bypassing the distro entirely and running inside a Docker container or similar.


                  RiscOS had a nice GUI for dynamically configuring the memory available to different things.

                  Crap, I envisioned exactly this as one possible solution; it is indeed a usability nightmare.

                  Here’s the thing though: over the years we’ve observed programs being more and more resource hungry, to a point where it cannot possibly be ethically justified — not even by the additional functionality and better graphics we got. Clearly application devs got away with this. This is bad, and I want this to stop. Unfortunately I don’t have any good solution right now.

                  NT just reports failure in VirtualAlloc and that’s typically propagated as an SEH exception. And then a program crashes with an out-of-memory error. And then you look at Task Manager and see that you have 60 GiB of RAM free and go ‘huh?’.

                  Dear Lord, I’d take overcommit over that any day.

                  1. 3

                    One important thing where Windows seems to have the upper hand is ABI stability:

                    Yes and no. UNIX comes from the same tradition as C, whereas Windows was designed to be language agnostic. This means that none of the system data types are standard C types and the core APIs can be used from C, Pascal, and other things.

                    Core libraries use this, and higher-level ones use COM. There is no platform C++ ABI and Visual Studio changes the C++ ABI periodically independent of Windows releases. MinGW supports, I think, three different C++ ABIs. In contrast, for the last 20 years or so, *NIX platforms have all used the Itanium C++ ABI and had a stable C++ ABI.

                    In terms of system APIs built on top of these low-level ABIs, Windows is definitely better. Qt, GTK, and so on break their interfaces far more often than Win32. That’s not to say that Win32 is perfect. I have more success running older Windows programs on WINE on macOS than I do on newer Windows.

                    There are some things Windows does well, but a lot of the system was aggressively optimised around machines with 8-16 MiB of RAM and has kept those core abstractions. It is to desktop operating systems today what Symbian was to mobile operating systems in 2007: lots of good solutions to the wrong set of problems.

              2. 2

                The concept of trading off fine-grained error handling for coarse-grained error handling may be useful in specific scenarios, but it does not work here. For one, these coarse-grained errors aren’t being handled in practice in the vast majority of cases. The only reason systems with OOM killers seem to work is because the systems are usually so over-provisioned with RAM that they never actually operate in an overcommitted state. This makes these systems a ticking time bomb.

                Wholesale ignoring of these coarse-grained memory exhaustion errors is tolerated because software malfunctioning in non-mission-critical contexts in general is tolerated as long as it doesn’t happen too often. The downside is that software in non-mission-critical contexts becomes known to be critically unreliable, and users of this type of software, otherwise known as consumers, start to accumulate erratic behaviors to deal with the risk of unpredictable failure, e.g. pressing save after every sentence written, even when auto-save is on.

                From a pure system design standpoint, admitting overcommit and OOM killing changes the process abstraction in a substantial way. Now you must account for your program dying randomly through no fault of its own if you want your program to be correct. It’s absurd to engineer a program under these hostile circumstances. The implication here is that every program must have a process monitor to handle random deaths. Oh wait, but now the process monitor must have a process monitor, ad infinitum. Okay then let’s add a special case for the root process monitor. Hmm this asymmetry is kind of smelly, maybe it would be simpler to not have programs randomly crash through no fault of their own?

                If we must sacrifice fork() and mmap() to get a sane process abstraction and deterministic operation under resource exhaustion, so be it.
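
                For contrast, a hypothetical sketch of the behavior being criticized, assuming a Linux machine with the default overcommit heuristics (not something you want to run on a machine you care about): each allocation “succeeds”, and the failure only surfaces later, as an uncatchable SIGKILL from the OOM killer, once the pages are actually touched.

                    #include <stdio.h>
                    #include <stdlib.h>
                    #include <string.h>

                    #define CHUNK (1024ull * 1024 * 1024)   /* 1 GiB per allocation */

                    int main(void) {
                        /* Keep allocating 1 GiB chunks. With overcommit, each malloc
                           usually "succeeds" long after RAM + swap are spoken for. */
                        for (int i = 0; ; i++) {
                            char *p = malloc(CHUNK);
                            if (p == NULL) {
                                /* Without overcommit this is where exhaustion would be
                                   reported, and the program could degrade gracefully. */
                                fprintf(stderr, "allocation %d refused\n", i);
                                return 1;
                            }
                            /* Touching the pages is what actually consumes memory; at some
                               point the OOM killer terminates this process (or an unrelated,
                               innocent one) with SIGKILL, which cannot be caught or handled. */
                            memset(p, 1, CHUNK);
                            printf("committed %d GiB\n", i + 1);
                        }
                    }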

                1. 6

                  The only reason systems with OOM killers seem to work is because the systems are usually so over-provisioned with RAM that they never actually operate in an overcommitted state. This makes these systems a ticking time bomb.

                  Meanwhile, on the Windows desktop I used at Microsoft (which didn’t have overcommit), allocations would start to fail when memory usage was at 60% and programs would crash with unhandled exceptions from allocation failures.

                  This isn’t some hypothetical alternate reality, the most popular desktop operating system does the opposite thing and it doesn’t work well at all.

                  Wholesale ignoring of these coarse-grained memory exhaustion errors is tolerated because software malfunctioning in non-mission-critical contexts in general is tolerated as long as it doesn’t happen too often.

                  And that’s how you build reliable systems. Software cannot be 100% reliable because computers cannot be 100% reliable.

                  From a pure system design standpoint, admitting overcommit and OOM killing changes the process abstraction in a substantial way. Now you must account for your program dying randomly through no fault of its own if you want your program to be correct

                  As you must for any non-trivial program even without overcommit. Unless your program is 100% bug free (including all of its dependencies) it will crash at some point. Again, you build reliable systems by letting things fail and recover.

                  Apple systems handle memory very well because the kernel supports Sudden Termination. Processes advertise that they are at a point where they have no unsaved data and so can be killed with no cleanup. This lets them use memory far more efficiently than Windows.

                  1. 3

                    Meanwhile, on the Windows desktop I used at Microsoft (which didn’t have overcommit), allocations would start to fail when memory usage was at 60% and programs would crash with unhandled exceptions from allocation failures.

                    The difference is that these are identifiable errors in the program with stack traces. This can be addressed and fixed like any other bug. Non-deterministic OOM deaths cannot.

                    Software cannot be 100% reliable because computers cannot be 100% reliable.

                    As you must for any non-trivial program even without overcommit. Unless your program is 100% bug free (including all of its dependencies) it will crash at some point.

                    I think these statements muddy the waters between process death per spec and process death due to malfunction or error. There is a useful concept known as “normal operation.” Something that is specified to happen is something that I must build machinery to handle. Something that is not specified to happen is something that I do not need to build machinery to handle. If OOM deaths are specified to happen (and they are) then that means that user-space is responsible for handling random death, yet it’s common knowledge that close to 99% of software that exists doesn’t handle that situation. The implication is that you are saying nearly all software is incorrect. Is that right?

                    The problem is that while it may be the case that nearly all user-level software is critically unreliable due to not handling random OOM death, at those high rates of non-compliance I think it’s reasonable to say that de facto it’s not the user-level software that is wrong but the OS. At some point the OOM killer spec was invisibly implemented and close to 0% of programmers got the memo.

                    I would just add that placing this burden on user-space makes a system that never actually runs your program and always crashes it a valid system according to spec, which is the absurd logical conclusion to which I was hinting. Compare this to a system that is specified to only return memory errors when there is no memory left.

                    Apple systems handle memory very well because the kernel supports Sudden Termination. Processes advertise that they are at a point where they have no unsaved data and so can be killed with no cleanup. This lets them use memory far more efficiently than Windows.

                    While the efficiency gains may be true, the only reason that this works is that this feature is broadly advertised and application developers are designing against this spec. As I said earlier, this is simply not the case on BSD, Linux, macOS, et al. I want to note that according to the arguments you are making above, this feature is misleading, since programs should always be in a state where they can be killed with no cleanup, since computers are not 100% reliable.

                    1. 3

                      The implication is that you are saying nearly all software is incorrect. Is that right?

                      I’m uncomfortable with overcommit too (indeed I tend to disable it in Linux), but… isn’t it accepted as true that nearly all software, or at least nearly all complex software not formally verified, is incorrect?

                      1. 2

                        It’s probably a safe bet that every complex piece of software is not 100% correct and there is at least one subtle issue lurking somewhere in there. That said, I think there is a meaningful distinction between a subtle bug and a blatantly ignored condition that is specified to happen.

                    2. 2

                      I will grant that the Windows experience sounds horrible (having no firsthand dev experience on Windows myself) but do we necessarily know that it is caused specifically by overcommit, or could it be some other bad aspect of memory allocation policy or design? It seems a stretch to say, Windows fully commits, Windows is terrible, therefore fully committing is terrible.

                2. 5

                  system designers in the 80s jumped head first into virtual memory

                  Virtual memory dates back to the Manchester / Ferranti Atlas computer designed around 1960, and it became common in other large systems in the next few years.

                  1. 1

                    That’s a fair point. I was implicitly referring to virtual memory adoption and spread in Unix and related systems, e.g. Mach. That is where I see the main lineage of contemporary widespread systems coming from; maybe what some would consider our current OS monoculture.

                  2. 2

                    I feel like system designers in the 80s jumped head first into virtual memory while hand-waving away, or before fully thinking through, all the robustness consequences it would have.

                    Bear in mind that they didn’t have as many resources as we have today, so I still believe they went down this path knowingly.

                    1. 1

                      In practice though, do you often suffer when the OOM killer strikes?

                      I don’t use Linux on my laptop, but at ${dayjob} we have a bunch of Linux servers, and we never had a problem with it AFAICT.

                      1. 5

                        I’ve seen it happen on servers, where it would randomly decide to kill exactly the wrong process. Especially when running “standard” somewhat badly behaved software like Drupal (which can be very memory hungry).

                        1. 1

                          Considering what David C wrote in other comments regarding the challenges of handling memory failures with fine granularity – would it not then make sense to try to make improvements at the application level? Can Drupal/the application be rewritten to become less memory-hungry? If that’s not practical or takes too long, then memory is relatively cheap these days – one could also increase the memory available to the application.

                          1. 3

                            Sure, but when you’re hosting websites with a standard package, there’s a tension between how much hardware you can throw at it and how much you can charge the customers.

                            It’s actually quite a shitty place to be in - either you host yourself and have to increase the hardware and hopefully the customers don’t run away because hosting gets too expensive, or they host elsewhere and their application gets killed or is too slow and they come complaining at you.

                            I’m glad I’m not working in a place where we use these overblown CMSes/“frameworks” anymore.

                            1. 3

                              Most *NIX systems, rather than having an OOM killer, will cause page faults when you try to access memory that you’ve mmap’d but where there aren’t available pages. This tends to work quite well with things like Drupal. The php-fpm process takes a segfault, crashes, and is automatically restarted. You return a 50x error for one request and then the PHP runtime is restarted with the minimum memory required and can grow again.

                              The overall system is then fairly reliable.

                      2. 24

                        Oh no! Now systems people who collaborate with verification people need a new joke!

                          1. 39

                            Systems people understand the value of reuse and testing. Once you have a joke that works, you can reuse it in every applicable context. Requiring us to come up with a new joke may require significant additional development and testing time.

                            1. 24

                            Let’s be honest: there have been plenty of testimonies from academics that the original name entailed inappropriate behavior ranging from bad jokes to harassment. I hope this renaming will contribute to reducing – at least a bit – the leaky pipeline in this area of CS.

                              1. 12

                                there have been plenty of testimonies from academics that the original name entailed inappropriate behavior ranging from bad jokes to harassment.

                                That’s a shame. I’ve not seen that first hand, it’s just been a good way of telling who on a project is a verification person and who isn’t: when someone announces that they like Coq, the systems people are the ones obviously trying really hard not to smirk, the verification people are the ones that nod (or say that they prefer some other theorem prover).

                                1. 51

                                  I would have just flagged this and moved on, but I feel that I need to take a moment to clarify something. This isn’t just about “not letting people have ‘fun’”, this is the kind of thing that happens when our industry is being taken more seriously at a societal and institutional level. This name was long overdue to be replaced because it is fundamentally a penis joke. These jokes are “fun”, but they don’t really have a place in professional work environments because “jokes” aren’t always funny to all parties in the vicinity of that joke.

                                  I feel that simplifying this down to “No fun allowed” is a misinformed take, and I would really rather not see such things on here as I feel they aren’t conducive to the desired environment of this community.

                                  1. 10

                                    This name was long overdue to be replaced because it is fundamentally a penis joke.

                                    This is not correct. Recalling that the software originated in France, here’s Wikipedia:

                                    The name “Coq” is a wordplay on the name of Thierry Coquand, Calculus of Constructions or “CoC” and follows the French computer science tradition of naming software after animals (coq in French meaning rooster).

                                    I’m not saying it wasn’t time for a rename, but fundamentally it had nothing to do with cocks of the American type.

                                    1. 19

                                      This is not not correct. The oral history around this name is that the person who made the decision (Gérard Huet) was well aware that it would lead to penis jokes. I’m not sure why you would expect to see this slightly embarrassing fact explicitly mentioned on Wikipedia.

                                      1. 4

                                        Perhaps this is right; I had never heard this “oral” history before today.

                                        In any case, I have some bad news about queues.

                                        1. 15

                                          I can also confirm having heard this from someone deep in the Coq community.

                                          1. 4

                                            First of all, I am totally in favor of this change. But, where is the push to change the word “byte”, which sounds just like “bite”, French slang for penis? My understanding of the oral history of the name is that it was intended as a slight rebellion against the naming of byte.

                                            Otherwise, this all smacks of hypocrisy/american cultural imperialism/exceptionalism.

                                            Maybe I’m wrong. I’m not French. But if you know the oral history, you should also know that.

                                            (Of course, it’s not as easy to change. Doesn’t mean it’s not hypocritical to omit the full context)

                                            1. 6

                                              The French term for byte is “octet”, for precisely the reason that byte is homophonic with the slang term for penis.

                                              1. 5

                                                Maybe I’m wrong. I’m not French.

                                                The word that sounds like French “bite” is actually bit, not byte. Since we (French speakers) gladly use the word (it’s the only one we have for that concept), I wouldn’t take it as an example of cultural imperialism. It’s just a good word, short and easy to pronounce.

                                                1. 3

                                                  Ah, that’s what I had actually thought I had heard, but then I looked up French slang and saw “bite”; I should have thought about how it would sound w/ French pronunciation.

                                                  So does ‘bit’ not create uncomfortable situations as did Coq? And this is not actually a social issue for francophones? Genuine question here; if it does, we should change it.

                                                  1. 4

                                                    Yes it does indeed create uncomfortable situations. As do “queue” (as @gamache mentioned) which is also another slang for the same thing (as well as the word for “tail”).

                                                    1. 1

                                                      so… it seems like the point is valid? that it was started as a cheeky form of protest, and we’re ignoring what it was protesting?

                                            2. 6

                                              I don’t mean to insist but I am negatively surprised by the confidence with which you stated something, in your earlier post, about a topic you know nothing about. I mean, someone made a comment that is arguably factually correct, and you cited Wikipedia at them (in a way that is not actually an argument: the fact that something is not discussed on Wikipedia does not say anything about whether it is true or false) and stated that (emphasis yours) “fundamentally it has nothing to do with cocks of the American type”, which is an awfully confident way to state something factually incorrect.

                                              I don’t expect an apology or anything, but my take away is that some people here sure sound very confident about things that they don’t know about, even when they reply to well-formed, nuanced, valid takes on things. I am not used to this level of discourse on Lobsters. I suspect that this whole discussion is not bringing the best out of us.

                                              1. 4

                                                    If you look at the mailing list discussion debating the change you’ll see that people were well aware of the double entendre when the name was chosen. The main debate is whether they chose it because of this explicitly or whether it was an added bonus.

                                                There’s no place for such names in our profession if we want to be inclusive. Good to see it go like NIPS a few years ago

                                            3. 8

                                              Cock actually also just meant rooster originally.

                                              1. 5

                                                It’s associated with a penis joke in some contexts, indeed a significant fraction of the cases in which it would be a useful tool. And as a tool otherwise fit for professional and academic use, it would like to be unencumbered by that association in those contexts.

                                              2. 3

                                                What would this even be flaggable for? The user isn’t trolling or being unkind, you just disagree and think the joke isn’t fun.

                                                1. 1

                                                  It’s a contentless quip that flippantly dismisses the issues raised in the post it’s responding to, namely inappropriate jokes and harassment (implicitly: mostly towards women).

                                                  1. 2

                                                    Yes, but flagging isn’t “I don’t like this post.” It’s not a downvote.

                                                    1. 2

                                                      https://lobste.rs/about#flags

                                                      For comments, these are: “Off-topic” for drifting into meta or topics that aren’t related to the story; “Me-too” when a comment doesn’t add new information, typically a single sentence of appreciation, agreement, or humor; “Troll” for derailing conversations into classic arguments, well-intentioned or not; “Unkind” when uncharitable, insulting, or dismissive; and “Spam” for promoting commercial service

                                                      On the face of it, I think the comment in question meets the criteria for every comment flag reason except “Spam”.

                                                  2. 5

                                                    The name is fundamentally a French word.

                                                    1. 4

                                                      The name is fundamentally a French word.

                                                      Is the fact that it’s a French word ‘fun’? The commenter above said “No fun allowed!”, implying they think that keeping the name is fun. I interpret that as that person finding the pun with English fun. And I interpret cadey’s reply in that context.

                                                    2. 5

                                                      This is imperialist action in defense of either protestant values or the English language in computer science. It’s either “no fun allowed” or “English will be your primary mode of operation”.

                                                      1. 4

                                                        If a name was slang for a private body part in Spanish or Arabic or any other language, it would be off the table as well.

                                                        (Almost all software projects have names that aren’t slang for private body parts in any language, so this is not a particularly high bar to cross.)

                                                        1. 15

                                                          The standard you propose has long-since failed; the typical example in this context is the French slang “bite”, pronounced like English “bit”.

                                                            1. 1

                                                              That isn’t the name of a software project.

                                                  3. 2

                                                    Once you have a joke that works, you can reuse it in every applicable context. Requiring us to come up with a new joke may require significant additional development and testing time.

                                                    So… is this comment an old, proven joke, or a new joke that you’re testing? :-)

                                                2. 6

                                                  I hope the move goes smoothly, wouldn’t want them to get stuck between a rocq and a hard place

                                                  1. 3

                                                    I am truly sorry, for both of them.

                                                  2. 19

                                                    With a name like “Rust Leadership Council” it seems like the Rust project is falling into terminal bureaucratic decay. Too many cooks are spoiling the broth.

                                                    I’m not an expert in how social structures like these evolve but in my experience these sorts of outcomes seem to happen more often in the absence of the original figurehead or main stakeholder. Without someone like that, who breaks the ties when two people disagree? Whose opinion do people defer to in the face of ambiguity?

                                                    1. 33

                                                      The Rust project has always made decisions collaboratively via RFC and discussion. There was a post here not long ago from the actual “original figurehead” where they commented on things they wanted that didn’t happen, and admitted that Rust is better off for not having followed their original vision. It’s literally titled “The Rust I Wanted Had No Future.”

                                                      1. 3

                                                        One can interpret that post from Graydon Hoare in multiple ways. I think there were some great ideas in there, and I really wish Rust had adopted some of them in the past. It feels too late for any major changes in the language right now, so the Rust we have is the one we’re stuck with.

                                                        Maybe it’s just my biased view, but what I read between the lines in that post (and also when taken in the context of his not-so-kind words from his previous one on BDFLs and governance) is that he’s not really satisfied with the state of current Rust. He just seems to be a sensible person who wouldn’t want to make any big claims and fuel the drama, so it was phrased like that, and less as a direct critique.

                                                        Feels like the message wasn’t so much directed at Rustaceans, but future language designers instead, and I hope some of these ideas will become foundational to newer languages.

                                                           I was really surprised at how many people read that post and just commended him for stepping away with his “wrong ideas”, without much of a discussion about that alternative Rust he was suggesting.

                                                        1. 4

                                                             What’s your point exactly? My post is made under the premises that you highlight. In the presence of Rust’s historically collaborative process and the withdrawal of its potential BDFL, it finds itself in the current politics-ridden and bureaucratic situation.

                                                          1. 34

                                                            My point is that Rust project isn’t falling into “terminal bureaucratic decay” because this is how the project has always worked. This is an adjustment to an organizational model that has already succeeded in bringing us the Rust we have today. If you think there ever was some BDFL who is definitely responsible for Rust’s success, you are wrong.

                                                            1. 6

                                                              The point is that almost no meaningful decisions come down to “two people disagree”. If two people feel strongly about disagreeing, the question probably merits an RFC.

                                                          2. 17

                                                            I feel like this doesn’t reflect how Rust actually works. The high order bit is that the work is done by the teams, not by Core/Foundation/Leadership Council, which have relatively little input into the actual technical artifacts.

                                                            In other words, the “politics” bits of Rust project tend to be much more visible on link aggregators, but they don’t constitute a large fraction of what actually happens. That looks mostly like this: https://this-week-in-rust.org/blog/2023/06/14/this-week-in-rust-499/.

                                                            1. 13

                                                              who breaks the ties when two people disagree?

                                                              Secret meetings and back channels, which is how we got here in the first place.

                                                              1. 7

                                                                In general, this sort of social structure is described by political science. The sort of person you desire is called a dictator. While there is a theory of benevolent dictatorship, most systems require multiple specialists with intentionally diverse views, and a single person is inefficient at best.

                                                                1. 8

                                                                  name

                                                                  So you’re going on about just the name?

                                                                  The creation of this Council marks the end of both the Core Team and the interim Leadership Chat

                                                                  terminal bureaucratic decay

                                                                  Sunset some structures and formalize new ones. Don’t think this is a terminal move in any way.

                                                                  I’m not an expert in how social structures like these evolve

                                                                  Obviously.

                                                                  1. 4

                                                                    Process and groups and names are organizational scars. If you have once made a mistake as an organization, the only way to fix that for eternity is to create a process and make sure there’s a group or individual as a stakeholder until eternity.

                                                                       The deeper the scars, the more complicated the process. And, from an outside observer perspective, the Rust project has certainly gone through some stuff lately…

                                                                    1. 1

                                                                      Whose opinion do people defer to in the face of ambiguity?

                                                                      The council. Which has a representative from every team.

                                                                      1. 9

                                                                        Not really. Council doesn’t make decisions, it can only designate which team should handle the decision:

                                                                         This is very important: council isn’t really at the apex of decision making. The buck stops with the relevant team.

                                                                        1. 1

                                                                          Ah, OK. Sorry, I misunderstood.

                                                                    2. 41

                                                                       I am fed up with the Rust hype myself but this is just pointless. “if written by the right people.” Yeah sure, you don’t need memory safety if you have the magical right people who never make memory errors.

                                                                      1. 42

                                                                         If you have people who have sufficient attention to detail to write memory safe code in C, imagine what they would be able to do if you gave them tools that removed that cognitive load and let them think about algorithms and data structures.

                                                                        1. 3

                                                                          Software is prone to many sorts of defects, and I find it quite plausible that there are people who could, pursuant to appropriate external factors, produce software in c with an overall defect rate not far off from what it would be if they wrote it in, say, typescript. (Typescript is quite an odd strawman here, considering that it doesn’t have—and isn’t, to my knowledge, known for having—a particularly strong or expressive type system, but we’ll roll with it.) I might even go so far as to place myself in that category.

                                                                          I do agree with david that, absent such external factors, there are very good reasons to not choose to write applications in c; it is rather absurd that, in a general sense, programming languages are used which are not garbage-collected and capability-safe.

                                                                          1. 6

                                                                            Typescript is quite an odd strawman here, considering that it doesn’t have—and isn’t, to my knowledge, known for having—a particularly strong or expressive type system, but we’ll roll with it.

                                                                             Not strong, perhaps—it’s intentionally unsound in places—but I’d argue that it’s among the most expressive of mainstream languages I’ve used. You can express things like “function that, given an object [dict/hashmap], converts all integer values into strings, leaving other values as-is”, and even do type-level string parsing - very handy for modeling the sorts of metaprogramming shenanigans common in JS libraries.

                                                                          2. 2

                                                                            Not disagreeing, but something to add here.

                                                                            Languages also won’t prevent bad designs, bad performance, hard, unreasonably complicated builds and deployments, bugs (security-related or other kinds), and so on. So you can find projects by “the wrong people” in every language. Sometimes this colors how a language is perceived, especially when there aren’t many widely used applications written in it, or it is simply dominated by few applications.

                                                                             Another thing when it comes, for example, to C and security: it might depend a lot on context and not just the language itself. First of all, yes, C has a problem with memory safety. I am not here to defend that, but I think it’s a great example where huge numbers of workarounds and mitigations have emerged in the ecosystem. For example, running a service in C written for Windows 98 has very different properties from OpenSSH on OpenBSD or something using sandboxing on Linux or OpenBSD, and switching libraries, Valgrind, fuzzers, etc. can make a drastic difference. While that doesn’t make C safe, it does make a difference in the real world, and as such it is reasonable to go with a project written in C for security reasons when the other option is some hacked-together project that might claim to be production ready, but hasn’t been audited and was programmed naively.

                                                                             So in the end it is rarely as black and white, and is very context dependent. The article could be understood as saying that a language doesn’t define these things on its own.

                                                                            1. 13

                                                                              Languages also won’t prevent bad designs, bad performance, hard, unreasonably complicated builds and deployments, bugs (security-related or other kinds), and so on.

                                                                              No, languages exist exactly to do these things. I’m excluding the extreme case of someone stubbornly writing intentionally terrible code to prove the point. When developers write code with best intentions, the language matters, and it does help or hinder them.

                                                                              We have type systems, so that a “bad programmer” calling a wrong function will get a compilation error before the software is released, instead of shipping buggy code and end users getting random “undefined is not a function”.

                                                                              When languages don’t have robust features for error handling, it’s easy to fail to handle errors, either by accident (you didn’t know a function could return -1) or just by being lazy (if it’s verbose and tedious, it’s less likely to get done). When languages won’t let you ignore errors, then you won’t. Ignoring an exception — On Error Resume Next style — needs to be done actively rather than passively. Rust goes further with Result types that in most cases won’t even compile if you don’t handle them somehow.

                                                                              You can have for-each loops that won’t have off-by-one errors when iterating over whole collections. You can have standard libraries that provide robust fast containers, good implementations of basic algorithms, so that you don’t poorly reinvent your own. Consider how GTA was terribly slow loading for years because of a footgun in a bad standard library combined with a crappy DIYed JSON parser. This error wouldn’t happen in a language with better string functions, and would be a total non-issue in a language where it’s easy to get a JSON parser.
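
                                                                               For a concrete flavor of that kind of footgun, here is a hypothetical sketch of the widely reported pattern (not the actual GTA code): sscanf called in a loop over one huge buffer. Some widely used C runtimes effectively take the length of the remaining input on every call, turning a linear parse into a quadratic one.

                                                                                   #include <stdio.h>

                                                                                   /* Sum the numbers in one big whitespace-separated buffer. Looks innocent,
                                                                                      but if sscanf re-scans the remaining string's length on every call,
                                                                                      parsing an n-byte buffer costs O(n^2) overall. */
                                                                                   long sum_numbers(const char *huge_buffer) {
                                                                                       long total = 0, value;
                                                                                       int consumed = 0;
                                                                                       const char *p = huge_buffer;
                                                                                       while (sscanf(p, "%ld%n", &value, &consumed) == 1) {
                                                                                           total += value;
                                                                                           p += consumed;   /* advance past the token just parsed */
                                                                                       }
                                                                                       return total;
                                                                                   }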

                                                                              It’s much harder to write bad Rust code than to write bad C or bad JS or bash. Good languages make “bad programmers” write better code.

                                                                              1. 3

                                                                                Languages also won’t prevent bad designs, bad performance, hard, unreasonably complicated builds and deployments, bugs (security-related or other kinds), and so on.

                                                                                No, languages exist exactly to do these things. I’m excluding the extreme case of someone stubbornly writing intentionally terrible code to prove the point. When developers write code with best intentions, the language matters, and it does help or hinder them.

                                                                                No, they don’t.

                                                                                • Bad designs are a factor of developer experience
                                                                                • Performance is a factor of algorithms and implementations
                                                                                • Complicated builds are based on design, build tools, libraries, frameworks, also see Bazel, etc.
                                                                                 • Deployments are pretty much the same thing. I’d agree that with builds and deployments the language can have a huge influence, which is why containers are often used as a workaround. If we look at C code there is a huge range, from super simple single binaries to absolutely horrible. So that’s why I don’t think it’s defined by the language.
                                                                                 • Bugs are to a large degree ones that can be made in every language. There are so many bugs that have been made in C, Java, Python, Perl, PHP, Go, and Rust: bad error handling, SQL injection, library misuse, etc. Sure, if your implementation has memory safety that is its own story, but it’s certainly not the only bug, and not the only security-critical one.

                                                                                We have type systems, so that a “bad programmer” calling a wrong function will get a compilation error before the software is released, instead of shipping buggy code and end users getting random “undefined is not a function”.

                                                                                 Yes. Yet there were C and C++, and people still came up with Perl, Ruby, and PHP, none of which have memory safety issues.

                                                                                You can have standard libraries that provide robust fast containers, good implementations of basic algorithms, so that you don’t poorly reinvent your own

                                                                                You can also have different implementations, replacements for standard libraries, other algorithms, because it’s not tied to the language.

                                                                                Consider how GTA was terribly slow loading for years because of a footgun in a bad standard library combined with a crappy DIYed JSON parser.

                                                                                 This underlines my point though. Despite being written in a language that is usually called fast, it was slow. Despite the language, it was made fast. So it was not the language defining the software, but these other factors.

                                                                                It’s much harder to write bad Rust code than to write bad C or bad JS or bash. Good languages make “bad programmers” write better code.

                                                                                 While I agree with that sentiment, I’d be curious about any studies on that. Way back when I was at university I dug through many studies on such topics: whether object-oriented programming has measurable benefits, typing, development methodologies, and so on. In reality they all seem to show a lot less (that is, no statistically significant) difference, from bugs to development speed to many other factors. They always end up boiling down to developer experience and confidence. Unless there were biased authors.

                                                                                 I think a great example is PHP. I think a lot of people here will have seen terrible PHP code, and it’s a language I like to hate as well. It’s incredibly easy to write absolutely horrible code in it. Yet the web would be empty without it. And even new projects are started successfully. I can’t think of anything good about it really, and I try to stay away from it as much as I can, yet even after decades of better alternatives new projects are frequently and successfully started based on it. And there’s some software in PHP that a lot of people here would recommend and often write articles about here. Nextcloud for example.

                                                                                1. 8

                                                                                  You’re reducing languages to the Turing Tarpit, overlooking human factors, differences in conventions, standards in their ecosystems, and varying difficulty in achieving goals in each language.

                                                                                  The fact that it’s demonstrably possible to screw up an implementation in a language X just as much as in language Y, doesn’t mean they’re equivalent and irrelevant. Even if the extreme worst and extreme best results are possible, it still matters what the typical outcome is, and how much effort it takes. In other words, even when a full range of different outcomes is possible, the language used can be a good Bayesian prior for which outcome you can expect.

                                                                                  Even when the same bugs can be written in different languages, they are not equally likely to be written in every language. This difference in likelihood is very important.

                                                                                  • A mistake of passing too few arguments to a function is a real concern in some languages, and a total non-issue in others.
                                                                                  • Strings with spaces — bane of bash and languages where code is naively glued from text. Total non-issue in most other languages.
                                                                                   • Data races are a real thing to look out for in multi-threaded C and C++ projects, requiring skill and diligence to prevent. In Rust prevention of this particular bug requires almost no skill or diligence, because the compiler does it (see the sketch after this list).
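
                                                                                   To make the C case concrete, a minimal hypothetical sketch of the kind of data race a C compiler will happily accept:

                                                                                       #include <pthread.h>
                                                                                       #include <stdio.h>

                                                                                       /* Shared counter with no atomics and no mutex: the increments race. */
                                                                                       static long counter = 0;

                                                                                       static void *bump(void *arg) {
                                                                                           (void)arg;
                                                                                           for (int i = 0; i < 1000000; i++)
                                                                                               counter++;              /* non-atomic read-modify-write */
                                                                                           return NULL;
                                                                                       }

                                                                                       int main(void) {
                                                                                           pthread_t a, b;
                                                                                           pthread_create(&a, NULL, bump, NULL);
                                                                                           pthread_create(&b, NULL, bump, NULL);
                                                                                           pthread_join(a, NULL);
                                                                                           pthread_join(b, NULL);

                                                                                           /* Compiles without a single warning, yet usually prints less than
                                                                                              2000000. The equivalent shared mutable access won't compile in
                                                                                              safe Rust. */
                                                                                           printf("%ld\n", counter);
                                                                                           return 0;
                                                                                       }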

                                                                                  You can also have different implementations, replacements for standard libraries, other algorithms, because it’s not tied to the language

                                                                                  For the record, Rust’s standard library is optional and replaceable. But my point was not about possibilities, but about the common easy cases. In Go you can expect programs to use channels, because they’re easily available. In C they are equally possible to use in the turing-tarpit sense, but in practice much harder to get and use, and this affects how C programs are written. Majority of Go programs use channels, majority of C programs don’t. Both could use channels equally frequently, but they don’t.

                                                                                   Despite being written in a language that is usually called fast, it was slow. Despite the language, it was made fast. So it was not the language defining the software, but these other factors.

                                                                                  There’s a spread of program speeds and program qualities, and some overlap between languages, but the medians are different. C is slow in this narrow case, but overall program speed is still influenced by the language, e.g. GTA written in pure Python wouldn’t be fast, even if it used all the best designs and all the right algorithms.

                                                                                  And in this case the language choice was the cause of the failure. Languages aren’t just 1-dimensional “fast<>slow”, but have other aspects that affect programs written in them. In this case the other aspects of the language — its clunky standard library and cumbersome handling of dependencies — were the culprit.

                                                                                  While I agree with that sentiment, I’d be curious about any studies on that.

                                                                                  It’s a famously difficult problem to study, so there’s unlikely to be any convincing ones.

In my personal experience: since switching to Rust I have not needed to use Valgrind, except when integrating C libraries. In my C work it was my regular tool. My attempts at multi-threading in C were awful, with crashy outcomes in both macOS GCD and OpenMP. In Rust I wrote several complex, pervasively multithreaded libraries and services without problems. In Golang I had issues with my programs leaving temp files behind, because I just can’t be trusted to remember to write defer every time. In Rust I never had such a bug, because its temp file library has automatic drop guards, so there’s nothing for me to forget. I find pure-Rust programs easy to build and deploy. In production I use cargo deb, which makes a .deb file with no extra config needed.
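For the temp file point, this is roughly the pattern, as a minimal sketch (assuming the tempfile crate, which is the library I have in mind):

```rust
// Sketch only; assumes the `tempfile` crate.
use std::io::Write;
use tempfile::NamedTempFile;

fn write_scratch(data: &[u8]) -> std::io::Result<()> {
    let mut file = NamedTempFile::new()?; // created in the OS temp directory
    file.write_all(data)?;
    // ... work with `file.path()` here ...
    Ok(())
    // `file` goes out of scope on every exit path (including the `?` above),
    // and its drop guard deletes the file; there is no cleanup call to forget.
}

fn main() -> std::io::Result<()> {
    write_scratch(b"scratch data")
}
```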

A bit less anecdata is https://github.com/rust-fuzz/trophy-case. While it demonstrates that Rust programs aren’t always perfect, it also shows how successful Rust is at lowering the severity of bugs. Exploitable memory issues are rare, and the majority are panics, which are technically the equivalent of thrown exceptions.

BTW, another aspect that the language influences is that in Rust you can instrument programs to catch overflows in unsigned arithmetic. C doesn’t distinguish between intended and unintended unsigned overflow, so you can’t get that out of the box.
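Roughly what that looks like in practice, as a small sketch (overflow checks are on by default in debug builds, and can be enabled for release builds via the overflow-checks profile setting):

```rust
fn decrement(x: u32) -> u32 {
    // Plain subtraction: panics in builds with overflow checks when x == 0,
    // silently wraps to u32::MAX otherwise.
    x - 1
}

fn main() {
    let x: u32 = 0;

    // Intended wrap-around is spelled out and never trips the checks.
    assert_eq!(x.wrapping_sub(1), u32::MAX);

    // Or it can be handled as an explicitly fallible operation.
    assert_eq!(x.checked_sub(1), None);

    // This one counts as unintended overflow and panics when checks are on.
    println!("{}", decrement(x));
}
```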

Speaking of PHP, it’s an example where even changes to the language have improved the quality of programs written in it. People programming for “old PHP” wrote worse code than when writing for “new PHP”. PHP removed footguns like HTTP includes. It removed magic quotes and the string-gluing mysql extension, encouraging use of prepared statements. It standardized autoloading of libraries, which meant more people used frameworks instead of copy-pasting their terrible code.

                                                                                  1. 4

                                                                                    You’re reducing languages to the Turing Tarpit, overlooking human factors, differences in conventions, standards in their ecosystems, and varying difficulty in achieving goals in each language.

                                                                                    Why not say ecosystems then? I specifically wrote that I was talking about implementations, ecosystems, etc., so I don’t feel like I am reducing anything here.

                                                                                    The fact that it’s demonstrably possible to screw up an implementation in a language X just as much as in language Y, doesn’t mean they’re equivalent and irrelevant.

                                                                                    I did not claim that.

                                                                                    Even when the same bugs can be written in different languages, they are not equally likely to be written in every language. This difference in likelihood is very important.

                                                                                    I did not claim otherwise.

                                                                                    Both could use channels equally frequently, but they don’t.

                                                                                    That is not a factor of the language though. And it’s certainly not defining software, which the actual topic was.

                                                                                    GTA written in pure Python wouldn’t be fast, even if it used all the best designs and all the right algorithms.

                                                                                    Again, I did not claim so.

                                                                                    And in this case the language choice was the cause of the failure. Languages aren’t just 1-dimensional “fast<>slow”, but have other aspects that affect programs written in them. In this case the other aspects of the language — its clunky standard library and cumbersome handling of dependencies — were the culprit.

Yes, but as you say languages are not 1-dimensional. There are trade-offs and many factors in why languages are chosen, and the language chosen might lead to issues, but it tends not to define the software. Unless you see some stack trace, or see some widget toolkit tied closely to a language you usually can’t tell the language something is written in, unless it’s badly designed (and I don’t mean by some super-human programmer, I mean average).

                                                                                    In Rust you can add an inexperienced member to the team, tell them not to write unsafe, and they will be mostly harmless. They’ll write unidiomatic and sometimes inefficient code, but they won’t cause memory corruption. In C++, putting a noob on the team is a higher risk, and they could do a lot more damage.

They can still make your program crash, and can still cause all sorts of other security issues, from SQL injections to the classic “personal customer data has been exposed publicly on the internet”. Yes, memory safety is a different topic, you don’t need to re-iterate that over and over. I don’t think anyone seriously claims either that C++ will result in better memory safety, nor that it isn’t an issue. The topic at hand is whether the choice of a language defines software. And I’d argue in most situations it doesn’t. Yes, if you have memory safety issues you want to get rid of, switch to Rust; if you want to avoid issues with deployment, move away from Python. However, like you say, those are only two dimensions and there is more to language choice than that. Else there would be that one language everyone uses - at least for new projects.

A bit less anecdata is https://github.com/rust-fuzz/trophy-case. While it demonstrates that Rust programs aren’t always perfect, it also shows how successful Rust is at lowering the severity of bugs. Exploitable memory issues are rare, and the majority are panics, which are technically the equivalent of thrown exceptions.

Yep, memory safety issues are bad, and the consequences horrible. But think about the C software you use on a daily basis. What percentage of that software do you think is defined by memory safety issues, or by being written in C in any other way? Of all the software I can think of, it’s only OpenSSL and image (or XML, sigh) parsers for me. That’s bad enough, but it’s still not much of the total, and sadly those are even pieces that are used by many other languages. Don’t get me wrong, I absolutely hope that soon enough everyone will, for example, use a Rust implementation of TLS. Also for most encoders/decoders it would be great; I’d be very happy if they were rewritten. Still, I think for most other software I use, including libraries, there are other defining factors than the language, which is exactly why it can be rewritten in Rust without too much headache. If the language defined the software, that would be a problem.

                                                                                    1. 1

                                                                                      Both [C and Go] could use channels equally frequently, but they don’t.

                                                                                      That is not a factor of the language though. And it’s certainly not defining software, which the actual topic was.

                                                                                      How is it not? Channels are a flagship feature of the Go language. This influences how typical Go programs are designed and implemented, and consequently it is a factor in their bugs and features.

                                                                                      The way I understand your argument is that one does not have to use channels in Go and can write a very C-like Go program, or may use C to write something with as much concurrency and identical behaviors as a channel-based Go program, and this disproves that programs are defined by their language. But I call this reducing languages to the Turing Tarpit, and don’t consider that relevant, because these are very unlikely scenarios. In practice, it’s not equally easy to do both. Given real-world constraints on skill and effort, Go and C programs will end up with different architectures and different capabilities typical for their language, and therefore “Written in Go” and “Written in C” will have a meaning.

                                                                                      Unless you see some stack trace, or see some widget toolkit tied closely to a language you usually can’t tell the language something is written in, unless it’s badly designed

                                                                                      There are many ways in which languages affect programs, even if just viewed externally:

                                                                                      • ease of installation and deployment (how they handle dependencies, runtimes or VMs). I’m likely to have an easier time running a static Go binary than a Python program.
                                                                                      • interoperability (portability across system versions, platforms, compatibility with use as a library, in embedded systems, in kernels, etc.). Wrong glibc.so version is not an issue for distribution of JS programs, except those having C++ dependencies.
• startup speed. If a non-trivial program starts in milliseconds, I know it’s not Java or Python.
                                                                                      • run-time performance. Yes, there are exceptions and bad implementations, but languages still impose limits and influence typical performance. Sass compilers switched from Ruby to C++, and JS bundlers are switching from JS to Go or Rust, because these language changes have a very visible impact on user experience.
                                                                                      • multi-core support. In some languages it’s hard not to be limited to single-threaded performance, or have fine-grained multi-threading without increased risk of defects.
• stability and types of bugs. Languages have different weaknesses. If I feed invalid UTF-8 to a Rust program, I won’t be surprised if it panics (see the sketch below). If I feed the same to JS, I’ll expect some UCS-2 mojibake back. If I feed it to C, I expect it to preserve it byte by byte, up to the first \0.

                                                                                      So “Written in X” does tell me which problems I’m signing up for.
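To illustrate the UTF-8 point above with a small sketch of the Rust end of that spectrum (my own example, not from any particular program):

```rust
fn main() {
    let bytes = vec![0x66, 0x6f, 0x6f, 0xff]; // "foo" plus one invalid byte

    // The fallible conversion refuses the input instead of producing mojibake.
    assert!(String::from_utf8(bytes.clone()).is_err());

    // Many programs take the panicking shortcut instead:
    // let s = String::from_utf8(bytes.clone()).unwrap(); // would panic here

    // Or they opt in to U+FFFD replacement characters explicitly.
    let lossy = String::from_utf8_lossy(&bytes);
    assert_eq!(&*lossy, "foo\u{FFFD}");
}
```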

                                                                                      1. 1

                                                                                        How is it not? Channels are a flagship feature of the Go language. This influences how typical Go programs are designed and implemented, and consequently it is a factor in their bugs and features.

That was in response to you saying that it could be done in C, and that the fact that it isn’t comes down to the language’s community. I was responding to that. What people do with a language, in my opinion, isn’t what a language is. Take JavaScript. Then look at its history. Then look at node.js, etc. It’s not the language that defined this path, it’s the community. In a similar way it could be that C programmers start using channels or something else as a default way to do things.

Anyways, I am not saying that channels are not an integral feature of Go and tied to the language; it’s a feature that is part of the specification. So of course it is.

                                                                                        ease of installation and deployment (how they handle dependencies, runtimes or VMs). I’m likely to have an easier time running a static Go binary than a Python program.

                                                                                        I stated the same thing!

                                                                                        [more points from the list “There are many ways in which languages affect programs” ]

                                                                                        I’d argue that this is at least partly implementation dependent, and I think that shows with things like FaaS. But I agree, that depends on the language. Again I am not trying to argue that languages aren’t different from each other. We keep coming back to this.

                                                                                        So “Written in X” does tell me which problems I’m signing up for.

                                                                                        Again something I don’t disagree with.

                                                                                        My point is that we see in the real world that C/C++ software is being rewritten in Rust. Often times that is done without the average user even noticing. Heck, there is even Rust in browsers where developers thought it was JavaScript. What I am arguing is that if this is the case, then a language doesn’t define a piece of software.

                                                                                  2. 6

                                                                                    It’s much harder to write bad Rust code than to write bad C or bad JS or bash. Good languages make “bad programmers” write better code.

                                                                                    While I agree with that sentiment, I’d be curious about any studies on that.

As reported by both Google and Microsoft, around 70% of security issues are caused by memory safety bugs. So any suitable language which prevents memory safety issues will lead to programmers writing better code (at least security-wise).

                                                                                    1. 1

                                                                                      Okay, thanks for the clarification. I thought you meant better code as in code quality not as in memory safety. And yes, it’s a quality of the software that you don’t have such issues, but still not the first thing that comes to mind when I read “code quality”.

                                                                                      1. 1

                                                                                        Memory safety issues, at best, cause crashes. At worst, they allow someone that can send you malicious data to execute arbitrary code with the privileges of the program. I’m curious of your definition of ‘code quality’ that includes arbitrary code execution vulnerabilities.

                                                                                        1. 1

So in your understanding there is zero distinction between quality C code and non-quality C code?

                                                                                          Yes, I do think there is a difference.

                                                                                          Please refrain from making up a “definition of ‘code quality’” that somehow is “mine” by completely ignoring the context. Also please stop pretending I am defending C or not having memory safe code/languages. That misses the point of the thread.

But since you asked for the definition: I was not talking about the design of the language here, but the design of the code. So when I was talking about code quality here, I meant readable, maintainable, understandable code. That’s not to say that languages themselves can’t have designs that make this hard, or that make, for example, secure code hard to achieve. Maybe the following helps with understanding. Another trait is a garbage collector, which makes memory safety a lot easier, but might have performance implications. You can still write performant and non-performant code in a garbage-collected language. Also some languages make this harder and some make it easier.

                                                                                          However, the topic is whether a language defines a piece of software. Given that software can be rewritten from C++ to Rust (parts of Firefox, etc.) or C to Rust (see the Tor project), I would not say that C defined these pieces of software.

I also think that, excluding outlier situations (Python vs Go), software in various languages has overall been both hard and easy to deploy, compile, etc. I think C is a great example of some of the most horrible situations and some of the easiest. I know C projects where you need to spend ages just to prepare for a build, and I know others where you dig out the most obscure platform, type in one or two commands and it just compiles and works.

For reasons like these I find that saying a language defines a piece of software is often an overstatement. However, it can certainly be a reason to struggle with it.

You don’t (usually) install a browser, a video game, or a tool, run it for the first time and, given it’s working, think “Huh, it’s written in language X”. If running it fails, or if your video game tells you about its engine and so on, I think that should be considered the exception. And I’d hardly call it “defining” the software.

I thought that would make sense, but something in what I am writing seems to sound like I am arguing that not having memory safety is a good thing or that I am defending C. Rust and C weren’t brought up by me. I also don’t think it’s a good idea to assume that programmers don’t make mistakes. If you could point out where I appear to take that point of view I’d be really curious. I’ve been hyped up about the prospect of people switching away from unsafe languages ever since the D programming language was announced. So I feel baffled when I come across as advocating unsafe languages.

                                                                                2. 4

Given two projects implementing the same functionality (some sort of network service), both implemented in similar time by equally competent developers: if one was in C and the other in Rust, would you be equally willing to expose them over the network?

                                                                                  1. 4

                                                                                    [Sorry, this is a longer response, because I don’t want to be misunderstood here]

                                                                                    That’s an abstract/theoretical example though. In reality I will gladly use nginx, OpenSSH, etc. over what exists in Rust, but not because of the language choice, which is the whole point I am trying to make. At the same time I wouldn’t trust other software written in C.

                                                                                    Let me ask you a question. Would you rather build your company on a real life kernel connected to the internet written in C or in Rust in the here and now?

I am maybe the odd one out, but I usually take a look over the code and project that I am using. There are situations where C code is simply more trustworthy. Often in “new” languages you end up with authors building on stuff that is itself new, so you can end up being the guinea pig. There are also a couple of signs that are usually good for a project, not because you need them on your own, but because they are a good indicator of maturity. One of them is portability. Another is not just throwing a docker container at you because the build is too complex otherwise.

                                                                                    And of course these things change. Might very well be that in future I’ll use a Rust implementation of something like OpenSSH (in the sense that OpenSSH isn’t just an implementation of the protocol). But right now in the real world I don’t intend to. I’d be way too worried about non-memory safety bugs.

But let’s go a bit more abstract than that. Let’s talk about an abstract service and only focus on how it is built, not what it actually does. If I got to choose between an exposed service, under your premises, in memory-safe node.js/JavaScript or in non-memory-safe kcgi/C, and my concern is being hacked, I can see myself going for the latter.

                                                                                    In your example, sure I’d go by Rust. But the thing here is that it’s because of missing context.

But it isn’t language dependent. For example, for non-language factors, I am pretty certain that there is more PHP and more node.js code out there with SQL injection bugs than in Python and Ruby. Does that mean I should choose the latter if I am interacting with databases? I’d also assume that in the real world, on average, node.js applications are more susceptible to DoS attacks (the non-distributed version) than for example PHP, simply because of how they are typically deployed/run. That also doesn’t have much to do with the language.

I am focusing on the title of the article here: “Software is not defined by the language it’s written in”. I really don’t think languages are the thing that defines software. I’ve seen surprisingly good, solid Perl code and I’ve seen horrible Java, Python and Rust code. The reason for that was never the language. Another example I can think of is Tor when it was still in C (and they switched to Rust for good reasons, pretty successfully), and there are similar projects in Java and Python, for example. But I wouldn’t use them, because even though they are written in a memory-safe language, I’d trust Tor more.

I think the usual reason why a project is started in a certain language is this. A developer has a number of values, usually also found in the project. At the time when the project is started, people with these values gravitate towards a language or a set of them. So when you look at a project and the chosen language you will usually see a time capsule of how languages were viewed during that time. But these views change. A project is started in C for other reasons than 10, 20, 30 years ago. The same is true for Python, Go, and even Rust has seen that shift already. While “more secure than C/C++, yet more performant” is a constant theme, things like WebAssembly, adoption by projects, and even changes in the language are factors that make people use or not use Rust for new projects (anymore).

                                                                                    1. 3

                                                                                      Would you rather build your company on a real life kernel connected to the internet written in C or in Rust in the here and now?

That’s rather easy, since there is no mature Rust-based kernel that has had similar (same order of magnitude) man-hours spent on it as Linux or even *BSD.

                                                                                      1. 3

That’s the very point I am trying to make. The language is not the defining factor of a piece of software. As you mention, the hours spent on a piece of software are a way bigger factor in this case.

                                                                                        1. 2

I just said that I would prefer a kernel in Rust that had X hours of development over a kernel in C that had 10X hours of development. How is that not a defining factor? :)

A good real-world example of that is ack vs ripgrep, mentioned in other threads. Both tools solve the same problem, yet it’s no surprise that ripgrep is significantly faster. And that’s with (as far as I understand) significant parts of ack being written in C (the perl regex engine). Imagine how much slower it would be in pure perl. Another dimension in which these tools differ in a predictable way is distribution. I can get a statically linked binary of ripgrep that has zero dependencies and runs on Linux. With ack there is no such option.

                                                                                          You can claim that technically you can have memory safety, performance and easy distribution in any language. That’s technically true, but in practice it’s not the case and you can predict quite well how different tools will handle those dimensions based on the implementation language they are written in.

                                                                                3. 12

                                                                                  The genre is “police procedural”, not “mystery”.

                                                                                4. 43

                                                                                  I still like Zulip after about 5 years of use, e.g. see https://oilshell.zulipchat.com . They added public streams last year, so you don’t have to log in to see everything. (Most of our streams pre-date that and require login)

                                                                                  It’s also open source, though we’re using the hosted version: https://github.com/zulip

                                                                                  Zulip seems to be A LOT lower latency than other solutions.

                                                                                  When I use Slack or Discord, my keyboard feels mushy. My 3 GHz CPU is struggling to render even a single character in the browser. [1]

                                                                                  Aside from speed, the big difference between Zulip and the others is that conversations have titles. Messages are grouped by topic.

                                                                                  The history and titles are extremely useful for avoiding “groundhog day” conversations – I often link back to years old threads and am myself informed by them!

                                                                                  (Although maybe this practice can make people “shy” about bringing up things, which isn’t the message I’d like to send. The search is pretty good though.)

                                                                                  When I use Slack, it seems like a perpetually messy and forgetful present.

                                                                                  I linked to a comic by Julia Evans here, which illustrates that feature a bit: https://www.oilshell.org/blog/2018/04/26.html

                                                                                  [1] Incidentally, same with VSCode / VSCodium? I just tried writing a few blog posts with it, because of its Markdown preview plugin, and it’s ridiculously laggy? I can’t believe it has more than 50% market share. Memories are short. It also has the same issue of being controlled by Microsoft with non-optional telemetry.

                                                                                  1. 9

                                                                                    +1 on zulip.

category theory: https://categorytheory.zulipchat.com/
rust-lang: https://rust-lang.zulipchat.com/

                                                                                    These are examples of communities that moved there and are way easier to follow than discord or slack.

                                                                                    1. 9

Zulip is light years ahead of everything else in async org-wide communications. The way the messages are organized makes it an extremely powerful tool for distributed teams and cross-team collaboration.

                                                                                      The problems:

                                                                                      • Clients are slow when you have 30k+ unread messages.
                                                                                      • It’s not easy (possible?) to follow just a single topic within a stream.
                                                                                      • It’s not federated.
                                                                                      1. 12

                                                                                        We used IRC and nobody except IT folks used it. We switched to XMPP and some of the devs used it as well. We switched to Zulip and everyone in the company uses it.

                                                                                        We self-host. We take a snapshot every few hours and send it to the backup site, just in case. If Zulip were properly federate-able, we could just have two live servers all the time. That would be great.

                                                                                        1. 6

                                                                                          It’s not federated.

                                                                                          Is this actually a problem? I don’t think most people want federation, but easier SSO and single client for multiple servers gets you most of what people want without the significant burdens of federation (scaling, policy, etc.).

                                                                                          1. 1

                                                                                            Sorry for a late reply.

                                                                                            It is definitely a problem. It makes it hard for two organizations to create shared streams. This comes up e.g. when an organization with Zulip for internal communications wants to contract another company for e.g. software development and wants them to integrate into their communications. The contractor needs accounts at the client’s company. Moreover, if multiple clients do this, the people working at the contracted company now have multiple scattered accounts at clients’ instances.

Creating a stream shared and replicated across the relevant instances would be way easier, probably more secure, and definitely more scalable than adding WAYF to the relevant SSOs. The development effort that would have to go into making the web client connect to multiple instances would probably also be rather high, and it would not be possible to perform it incrementally. Unlike shared streams, which might have some features disabled (e.g. custom emojis) until a way forward is found for them.

But I am not well versed in the Zulip internals, so take this with a couple grains of salt.

EDIT: I figure you might be thinking of e.g. open source projects each using their own Zulip. That sucks and it would be nice to have an SSO service for all of them. Or even have them somehow bound together in some hypothetical multi-server client. I would love that as well, but I am worried that it just wouldn’t scale (performance-wise) without some serious thought about the overall architecture. Unless you are thinking about the Pidgin-style multi-client approach solely at the client level.

                                                                                        2. 7

                                                                                          This is a little off topic, but Sublime Text is a vastly more performant alternative to VSCode.

                                                                                        3. 3

I feel like topic-first organization of chats, which Zulip does, is the way to go.

                                                                                            1. 16

                                                                                              It still sends some telemetry even if you do all that

                                                                                              https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                              That page is a “dark pattern” to make you think you can turn it off, when you can’t.


                                                                                              In addition, extensions also have their own telemetry, not covered by those settings. From the page you linked:

                                                                                              These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting. Consult the specific extension’s documentation to learn about its telemetry reporting and whether it can be disabled.

                                                                                              1. 4

                                                                                                It still sends some telemetry even if you do all that

                                                                                                I’ve spent several minutes researching that, and, from the absence of clear evidence that telemetry is still being sent if disabled (which evidence should be easy to collect for an open codebase), I conclude that this is a misleading statement.

The way I understand it, VS Code is a “modern app”, which uses a boatload of online services. It does network calls to update itself, update extensions, search in the settings and otherwise provide functionality to the user. Separately, it collects gobs of data without any other purpose except data collection.

                                                                                                Telemetry disables the second thing, but not the first thing. But the first thing is not telemetry!

                                                                                                • Does it make network calls? Yes.
                                                                                                • Can arbitrary network calls be used for tracking? Absolutely, but hopefully the amount of legal tracking allowable is reduced by GDPR.
                                                                                                • Should VS Code have a global “use online services” setting, or, better yet, a way to turn off node’s networking API altogether? Yes.
                                                                                                • Is any usage of Berkeley socket API called “telemetry”? No.
                                                                                                1. 3

It took me a while, but the source of my claim is VSCodium itself, and this blog post:

                                                                                                  https://www.roboleary.net/tools/2022/04/20/vscode-telemetry.html

                                                                                                  https://github.com/VSCodium/vscodium/blob/master/DOCS.md#disable-telemetry

                                                                                                  Even though we do not pass the telemetry build flags (and go out of our way to cripple the baked-in telemetry), Microsoft will still track usage by default.

                                                                                                  Also, in 2021, they apparently tried to deprecate the old setting and introduce a new one:

                                                                                                  https://news.ycombinator.com/item?id=28812486

                                                                                                  https://imgur.com/a/nxvH8cW

                                                                                                  So basically it seems like it was the old trick of resetting the setting on updates, which was again very common in the Winamp, Flash, and JVM days – dark patterns.

                                                                                                  However it looks like some people from within the VSCode team pushed back on this.

                                                                                                  Having worked in big tech, this is very believable – there are definitely a lot of well intentioned people there, but they are fighting the forces of product management …


                                                                                                  I skimmed the blog post and it seems ridiculously complicated, when it just doesn’t have to be.

                                                                                                  So I guess I would say it’s POSSIBLE that they actually do respect the setting in ALL cases, but I personally doubt it.

                                                                                                  I mean it wouldn’t even be a dealbreaker for me if I got a fast and friendly markdown editing experience! But it was very laggy (with VSCodium on Ubuntu.)

                                                                                                  1. 2

                                                                                                    Yeah, “It still sends some telemetry even if you do all that” is exactly what VS Codium claim. My current belief is that’s false. Rather, it does other network requests, unrelated to telemetry.

                                                                                                2. 2

                                                                                                  These extensions may be collecting their own usage data and are not controlled by the telemetry.telemetryLevel setting.

                                                                                                  That is an … interesting … design choice.

                                                                                                  1. 7

                                                                                                    At the risk of belaboring the point, it’s a dark pattern.

                                                                                                    This was all extremely common in the Winamp, Flash, and JVM days.

                                                                                                    The thing that’s sad is that EVERYTHING is dark patterns now, so this isn’t recognized as one. People will actually point to the page and think Microsoft is being helpful. They probably don’t even know what the term “dark pattern” means.

                                                                                                    If it were not a dark pattern, then the page would be one sentence, telling you where the checkbox is.

                                                                                                    1. 6

                                                                                                      They probably don’t even know what the term “dark pattern” means.

                                                                                                      I’d say that most people haven’t been exposed to genuinely user-centric experiences in most areas of tech. In fact, I’d go so far as to say that most tech stacks in use today are actually designed to prevent the development of same.

                                                                                                      1. 2

                                                                                                        The thing that feels new is how non-user-centric development tools are nowadays. And the possibility of that altering the baseline perception of what user-centric tech looks like.

                                                                                                        Note: feels; it’s probably not been overly-user-centric in the past, but they were a bit of a haven compared to other areas of tech that have overt contempt for users (social media, mobile games, etc).

                                                                                                    2. 4

                                                                                                      That is an … interesting … design choice.

How would you do this differently? The same is true of any system with plugins, including, e.g., Emacs and Vim: nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                      1. 3

                                                                                                        Kinda proves the point, tbh. To prevent a plugin from calling home, you have to actually try to design the plugin API to prevent it.

                                                                                                        1. 4

                                                                                                          I think the question stands: how would you do it differently? What API would allow plugins to run arbitrary code—often (validly) including making network requests to arbitrary servers—but prevent them from phoning home?

                                                                                                          1. 6

                                                                                                            Good question! First option is to not let them make arbitrary network requests, or require the user to whitelist them. How often does your editor plugin really need to make network requests? The editor can check for updates and download data files on install for you. Whitelisting Github Copilot or whatever doesn’t feel like too much of an imposition.

                                                                                                            1. 4

                                                                                                              Capability security is a general approach. In particular, https://github.com/endojs/endo

                                                                                                              For more… https://github.com/dckc/awesome-ocap

                                                                                                            2. 3

                                                                                                              More fun: you have to design a plugin API that doesn’t allow phoning home but does allow using network services. This is basically impossible. You can define a plugin mechanism that has fine-grained permissions and a UI that comes with big red warnings when things want network permissions though and enforce policies in your store that they must report all tracking that they do.

                                                                                                            3. 1

                                                                                                              nothing prevents a plug-in from calling home, except for the goodwill of the author.

                                                                                                              Traditionally, this is prevented by repos and maintainers who patch the package if it’s found to be calling home without permission. And since the authors know this, they largely don’t add such functionality in the first place. Basically, this article: http://kmkeen.com/maintainers-matter/ (http only, not https).

                                                                                                              1. 1

                                                                                                                We don’t necessarily need mandatory technical enforcement for this, it’s more about culture and expectations.

                                                                                                                I think that’s the state of the art in many ecosystems, for better or worse. I’d say:

                                                                                                                • The plugin interface should expose the settings object, so the plugin can respect it voluntarily. (Does it currently do that?)
                                                                                                                • The IDE vendor sets the expectation that plugins respect the setting
                                                                                                                • A plugin that doesn’t respect it can be dealt with in the same way that say malware is dealt with.

                                                                                                                I don’t know anything about the VSCode ecosystem, but I imagine that there’s a way to deal with say plugins that start scraping everyone’s credit card numbers out of their e-mail accounts.

                                                                                                                Every ecosystem / app store- type thing has to deal with that. My understanding is that for iOS and Android app stores, the process is pretty manual. It’s a mix of technical enforcement, manual review, and documented culture/expectations.


                                                                                                                I’d also not rule out a strict sandbox that can’t make network requests. I haven’t written these types of plugins, but as others pointed out, I don’t really see why they would need to access the network. They could be passed the info they need, capability style, rather than searching for it all over your computer and network!
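To make that concrete, here is a rough sketch of what a capability-style plugin interface could look like. The names (Host, Plugin, WordCount) are made up for illustration and not any real editor’s API; in plain Rust this is only a convention, since a plugin could still open sockets itself, so hard enforcement would need a sandbox such as wasm or a separate process with no network access.

```rust
/// Everything a plugin is allowed to do, handed to it explicitly by the host.
trait Host {
    fn document_text(&self) -> &str;
    fn log(&self, message: &str);
    // Deliberately no `fetch(url)` here: network access would be a separate,
    // user-approved capability rather than something ambiently available.
}

trait Plugin {
    fn on_save(&mut self, host: &dyn Host);
}

/// Example plugin: counts words on save, using only what the host provides.
struct WordCount;

impl Plugin for WordCount {
    fn on_save(&mut self, host: &dyn Host) {
        let words = host.document_text().split_whitespace().count();
        host.log(&format!("{words} words"));
    }
}

/// A stand-in host, just to make the sketch runnable.
struct FakeHost {
    text: String,
}

impl Host for FakeHost {
    fn document_text(&self) -> &str {
        &self.text
    }
    fn log(&self, message: &str) {
        println!("[plugin] {message}");
    }
}

fn main() {
    let host = FakeHost { text: "hello capability world".to_string() };
    let mut plugin = WordCount;
    plugin.on_save(&host); // prints "[plugin] 3 words"
}
```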

                                                                                                                1. 1

                                                                                                                  Sure, but they don’t offer a “disable telemetry” setting.

                                                                                                                  What I’d do, would be to sandbox plugins so they can’t do any network I/O, then have a permissions system.

                                                                                                                  You’d still rely on an honour system to an extent; because plugin authors could disguise the purpose of their network operations. But you could at least still have a single configuration point that nominally controlled telemetry, and bad actors would be much easier to spot.

                                                                                                                  1. 1

                                                                                                                    There is a single configuration point which nominally controls the telemetry, and extensions should respect it. This is clearly documented for extension authors here: https://code.visualstudio.com/api/extension-guides/telemetry#custom-telemetry-setting.
