Threads for zik

  1. 3

    This seems like a case of the use of UB in optimisation being taken a bit too far: “just because you can doesn’t mean you should”.

    If an optimisation is likely to cause bugs and confusion maybe it’s better not to do that optimisation, even if it’s theoretically allowed.

    1. 14

      This is… an understandable but poorly reasoned view.

      The compiler isn’t performing any insane leaps of logic here, or deliberately trying to break anything. What you’re seeing is just the cumulative effect of many relatively simple optimisation passes applied in succession. Even very simple optimisations, when applied in combination, can produce unintuitive results, and that’s not a bug: it’s a feature! It’s why your programs run fast.
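
      To make this concrete, here’s a minimal sketch (example mine; whether the final fold actually happens depends on compiler and flags):

          /* A check a programmer might write to detect INT_MAX before it wraps. */
          int will_wrap(int x) {
              /* Pass 1: signed overflow is UB, so assume x + 1 never wraps.
                 Pass 2: under that assumption, x + 1 < x is always false.
                 Pass 3: dead-code elimination then removes the branch. */
              if (x + 1 < x)
                  return 1;
              return 0;
          }

      Each step is a bread-and-butter pass; the surprising result only appears when they compose.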

      If we were to draw a line in the sand and try to rigorously define which optimisations count as ‘too clever’, then we’d need to come up with a specification: what things are okay for the compiler to exploit, and what aren’t?

      Guess what: we already have this specification, and it’s called the C standard! Your complaint seems less about the compiler and more about what the standard permits the compiler to do.

      1. 10

        Your complaint seems less about the compiler and more about what the standard permits the compiler to do.

        I will complain about the compilers! Or, rather, I will say that I think this article is entirely correct when it says that the ideas which have been built up over the years, by compiler authors, about the definition of UB are not supported by the text of the standard and are at best based in a misreading of the standard.

        The specific quotation that matters here is:

        Undefined behavior — behavior, upon use of those particular nonportable or erroneous program constructs, or of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements.

        Compiler authors treat this as if it reads:

        BEGIN DEFINITION OF TERM “UNDEFINED BEHAVIOR”

        behavior, upon use of those particular nonportable or erroneous program constructs, or of erroneous data, or of indeterminately-valued objects

        END DEFINITION OF TERM “UNDEFINED BEHAVIOR”

        BEGIN DEFINITION OF HANDLING OF UNDEFINED BEHAVIOR

        the Standard imposes no requirements.

        END DEFINITION OF HANDLING OF UNDEFINED BEHAVIOR

        But getting to that interpretation requires torturing the plain English text. It’s clear that the correct, intended interpretation is:

        BEGIN DEFINITION OF TERM “UNDEFINED BEHAVIOR”

        behavior, upon use of those particular nonportable or erroneous program constructs, or of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements.

        END DEFINITION OF TERM “UNDEFINED BEHAVIOR”

        In other words, “for which the Standard imposes no requirements” is part of the definition of UB, not of the handling of UB. Or: rather than “UB means the Standard doesn’t impose requirements on handling”, it’s “a thing is only UB if the Standard hasn’t imposed requirements for how to handle that thing”.

        It becomes even more clear, as noted in the linked article, when you notice that the Standard immediately follows this up with a paragraph giving permissible ways to handle undefined behavior. If the intent was that the Standard “imposes no requirements” on handling UB, why does it have a paragraph listing permissible ways to handle UB? The fact that the Standard was hackily edited for C99 (again, refer to the article) to try to make that follow-up paragraph non-normative is even more evidence.

        But the edit to that paragraph also drives home the fact that the text of the Standard doesn’t matter and never did. Compiler authors effectively invented their own language that wasn’t C as defined by the Standard, told everyone it was good to have this break from the Standard because they could use it to “optimize” programs, and then ad-hoc changed the Standard years later to make the compiler authors’ approach no longer a violation.

        And this gets back to something I sort of joke about but don’t find funny, which is that the expansive, Standard-violating definition of UB favored by compiler authors has ensured that it is effectively impossible to write a non-trivial C program, or even many trivial ones, without UB, which means that a “compiler” which simply emits a no-op executable for every input is arguably compliant.
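
        To illustrate the sort of thing I mean (examples mine, not from the article), here are two pieces of everyday C that are UB for some inputs:

            #include <stddef.h>

            int midpoint(int lo, int hi) {
                return (lo + hi) / 2;   /* UB if lo + hi overflows int */
            }

            ptrdiff_t distance(const char *p, const char *q) {
                return q - p;           /* UB unless p and q point into the same array */
            }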

        1. 8

          In other words, “for which the Standard imposes no requirements” is part of the definition of UB, not of the handling of UB. Or: rather than “UB means the Standard doesn’t impose requirements on handling”, it’s “a thing is only UB if the Standard hasn’t imposed requirements for how to handle that thing”.

          There are things that the standard specifically says are undefined behaviour. E.g. the example from that very definition:

          EXAMPLE An example of undefined behavior is the behavior on integer overflow.

          The article you’ve linked uses that line of reasoning to somehow say that there is some behaviour mandated for integer overflow, despite it being right there in the example that it has undefined behaviour.

          The behaviour mandated, according to Yodaiken’s article, is:

          Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)

          (The article argues that the original “permissible” had different semantics than the “possible” which replaced it, and that the “original” permissible was more correct, even though it was intentionally changed. I hope it’s clear why that’s a tenuous position to take). It’s right there in that text:

          ignoring the situation completely with unpredictable results

          That’s exactly what we’re seeing when integer overflow causes what the OP’s article calls “wild” behaviour - the compiler ignores the situation (that overflow happened) completely and optimises assuming it didn’t happen. Victor’s post twists this to mean that the compiler should “ignore” that the undefined behaviour was triggered and produce a result for the operation. He ignores the “unpredictable results” clause.
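
          A sketch of what that looks like in practice (example mine; the exact outcome depends on the compiler):

              /* With wrapping arithmetic, i eventually goes negative and the loop
                 exits with n == 31 on 32-bit int.  A compiler that "ignores" the
                 overflow may instead reason that doubling a positive int keeps it
                 positive, and treat the loop condition as always true. */
              int count_doublings(void) {
                  int n = 0;
                  for (int i = 1; i > 0; i *= 2)
                      n++;
                  return n;
              }

          No wrapped result, no trap - the compiler just optimises on the assumption that overflow didn’t happen, which is the “unpredictable results” reading.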

          In other words, “for which the Standard imposes no requirements” is part of the definition of UB, not of the handling of UB

          You can’t have one without the other. If it imposes requirements on the handling, then it imposes requirements. And then it would not be the case that it “imposes no requirements”.

          In any case, there is more detailed explanation of what constitutes undefined behavior elsewhere in the text (chapter 4):

          If a ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words ‘‘undefined behavior’’ or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe ‘‘behavior that is undefined’’

          So “undefined behaviour” also means “the behaviour is undefined”. It’s incredibly tenuous to claim that there are in fact requirements on the behaviour imposed, and that furthermore these requirements are stated only in the “definitions” section of the document (which in general does not specify requirements at all).

          It’s also tenuous to claim that the “undefined behaviour” of integer overflow should just result in an implementation-defined value, despite the fact that there is also a defined term “implementation-defined value” which could have been used to describe that case.

          I.e. your argument is that integer overflow is an example of undefined behaviour and so should have an implementation-defined value, despite the fact that the standard is quite specific in other cases when a value is implementation-defined but does not do so for integer overflow.

          getting to that interpretation requires torturing the plain English text.

          I strongly disagree and would say the same about the interpretation you’re arguing for, with the evidence that I’ve stated.

          1. 2

            You can’t have one without the other. If it imposes requirements on the handling, then it imposes requirements. And then it would not be the case that it “imposes no requirements”.

            The “imposes no requirements” should logically be read as meaning no other part of the Standard tells you how to handle this thing. Other parts of the Standard might sometimes have instructions for how to handle, say, an indeterminately-valued object, which would mean that for the situations where the Standard does provide instructions, it’s not UB.

            Hence this must be read as part of the definition of UB, not part of the handling of UB. The following paragraph tells you what to do with it, and this is not a contradiction.

            I strongly disagree and would say the same about the interpretation you’re arguing for, with the evidence that I’ve stated.

            Once again, the sentence is:

            Undefined behavior — behavior, upon use of those particular nonportable or erroneous program constructs, or of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements.

            The “official” reading says to split this into everything before the final comma and everything after the final comma, and to treat the former as the definition of UB and the latter as the handling of it. There is absolutely no world in which that is the natural plain-English reading of that sentence. And, again, if the intent was to have that be the meaning, then there would be no reason to ever have the following paragraph, let alone to originally have that paragraph be normative.

            I admit that the expansive definition of UB and how to handle it is a fait accompli at this point, but I wish its proponents would be honest and admit that getting there required willful violation of the standard.

            1. 2

              The “imposes no requirements” should logically be read as meaning no other part of the Standard tells you how to handle this thing.

              But it doesn’t say that at all. You’re inserting “other part”; it’s not there, and was never there, in the text. The text says, “for which the Standard imposes no requirements”. If the following part were in fact imposing requirements on the handling or any other aspect, then it would certainly be contradicting itself.

              And, again, if the standard was indeed intending to impose requirements on how undefined behaviour should be “handled”, the correct place to do that would certainly not be in the definitions section.

              And, again, finally, one of the “permissible” (or “possible”) behaviours is “ignoring the situation completely with unpredictable results”, which can easily be read to mean exactly what compilers are currently doing. Even if you think the intention was to place requirements on the handling of UB, this allowance seems by itself to remove any requirements. And certainly, “unpredictable results” doesn’t match with the notion that overflow should wrap (or otherwise match the behaviour of arithmetic instructions on the underlying hardware platform).

              There is absolutely no world in which that is the natural plain-English reading of that sentence

              I strongly disagree (and so do many others).

              if the intent was to have that be the meaning, then there would be no reason to ever have the following paragraph

              To elaborate. Admittedly, it seems like it could’ve been a NOTE rather than normative text, but it also hardly seems to matter. It wouldn’t be the only place where not-strictly-needed normative text was present.

              I wish its proponents would be honest and admit that getting there required willful violation of the standard.

              I feel like that’s designed to be inflammatory. Can you give a constructive response to the above criticisms of your standpoint? You have ignored these points in your reply; this feels more like dishonesty to me than not “admitting” that there was a “willful violation of the standard”.

              1. 3

                It’s not intended to be inflammatory. It just is what it is. The linked article made the argument pretty thoroughly, and I can’t think of any other situation in English where a similar sentence construction is naturally read in the way the definition of UB is allegedly meant to be read. The simple truth is that compiler authors did what they believed was most convenient for their purposes, it’s at odds with what the standard originally said, and now we’re stuck with it as a deeply-embedded part of C.

                I just wish there were more honesty about it, and less condescension (which is inflammatory) in these threads toward people who don’t like the status quo. Lots of things that compilers rely on being UB could instead have been implementation-defined or otherwise had clear semantics. It wouldn’t be the end of all performance forever. There’s no logically-necessary reason why C compilers behave the way they do – it’s just historical inertia that could have been avoided, and in more recent and better-designed languages is avoided.

                1. 2

                  Similar to how law cannot be read as common English but instead interpreted through decades and centuries of clarification, precedent, and interpretation, the same is true of a standard document. Standard English just isn’t good enough to describe things unambiguously enough to have uncontestable meaning. The standard means what people say the standard means and if the creators of the original standard want to clarify as otherwise, they should speak out and clarify their intentions.

                  1. 1

                    The linked article made the argument pretty thoroughly

                    It didn’t, though. It doesn’t address the points I raised (and nor have you).

                    I can’t think of any other situation in English where a similar sentence construction is naturally read in the way the definition of UB is allegedly meant to be read

                    It doesn’t require the strange reading that you use to come to your conclusion about the definition of UB. The error in reading is yours; you think that “handling” and “behaviour” are two separate things, but they aren’t. Furthermore, if the definition specifies (as it does) that part of what makes behaviour undefined is that the standard imposes no requirements on it, then the standard can impose no requirements on it - nor on any aspect of it such as “handling” (although as I’ve said, that distinction makes no sense anyway) - since that would be a contradiction.

                    Once more, with emphasis added:

                    Undefined behavior — behavior, upon use of a nonportable or erroneous program construct, or of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements.

                    Undefined behaviour is the behaviour when the erroneous construct (etc) is used, it is not the use itself. If “the standard imposes no requirements” on the behavior then it cannot then, in the next paragraph which is part of the standard, impose requirements on that behaviour. This notion that “it is now going to separately talk about the handling of such behaviour” is wrong because (a) the behaviour is the handling and (b) it’s already clearly stated that there are no requirements on such. But also, perhaps most importantly, (c), the “permissions” (one of them in particular) for “handling” are so general as to impose no requirements anyway, and very specifically contradict the integer overflow behaviour that you argue for (via the “unpredictable results” phrasing). You consistently fail to address this. The article by Yodaiken fails to address it also.

                    The simple truth is that […]

                    No, it’s just not. (We can both play that game - just stating something with an air of authority doesn’t make it true.)

                    1. 1

                      You and I did not get to consensus when we did this a year ago, so I have low hopes for it now.

                      wrong because (a) the behaviour is the handling and (b) it’s already clearly stated that there are no requirements on such

                      And I strongly disagree with (b), and last time we went round and round on this you seemed to understand why. The plain English reading of the Standard’s text does not lead to the idea that the Standard “imposes no requirements” on UB, and even if it did, then it makes no sense whatsoever for the very next paragraph to be a normative list of permissible ways to handle UB. I don’t know any plainer way to put this, and if you can’t see it, we should just stop talking past each other.

                      1. 2

                        I don’t know any plainer way to put this, and if you can’t see it, we should just stop talking past each other.

                        And to this I somewhat agree, but it’s hard not to respond to you when you claim that your favoured interpretation is a simple truth and further that anyone who takes the other viewpoint is somehow being dishonest - claims that you actually made.

                        1. 1

                          And I strongly disagree with (b), and last time we went round and round on this you seemed to understand why.

                          No, you need to read the post you’ve linked again. I understood what you believed, but reiterated that I thought it wasn’t right:

                          While that makes the whole argument make a little more sense, I don’t think the text supports this; the actual text does not use the “other part” wording at all (it’s unorthodox to put normative requirements in the “terms” section anyway), and I think my other points still hold: the “permissible undefined behaviour” from C89 specifically includes an option which allows for “unpredictable results”, i.e. the “required behaviour” is not requiring any specific behaviour anyway.

                          You have still never addressed that.

                          it makes no sense whatsoever for the very next paragraph to be a normative list of permissible ways to handle UB

                          … which is why it only makes sense to interpret that next paragraph as a potentially non-exhaustive list of behaviours which, anyway, imposes no real requirements since, once more, it allows the compiler to “ignore the situation completely, with unpredictable results”.

                          That last point, you continue again to overlook. I’ve brought it up repeatedly in this thread, and you never address it.

        1. 3

          This is the kind of inspired insanity I can get behind.

          1. 2

            Surprised to see all the love for FastCGI. My recollection is that it was a nightmare to use – very fussy (hard to program for and integrate with), and quite brittle (needing regular sysad intervention).

            1. 2

              I remember trying to set it up once on the server side (~10 years ago?) and it was not fun.

              However as a user on shared hosting, it works great. I’ve been running the same FastCGI script for years, and it’s fast, with no problems. So someone figured out how to set it up better than me (which is not surprising).

              I think the core idea is good, but for a while the implementations were spotty and it was not well documented in general. There seems to be significant confusion about it to this day, even on this thread of domain experts.

              To me the value is to provide the PHP deployment model and concurrency model (stateless/shared nothing/but with caching), but with any language.

              1. 1

                We ran FastCGI at quite large scale back around 2000 and it was very reliable and not particularly difficult to work with.

                1. 1

                  I was using it at mid-scale in the aughts (mod_fastcgi on apache) and it was not a pleasant experience. Maybe our sysads were particularly bad, or maybe our devs just didn’t get the concepts, but I recall others in my local user groups having similar difficulties.

              1. 1

                inb4 topic merged and disappeared from front page

                1. 6

                  That would be a shame because I feel this article has insights I haven’t seen elsewhere.

                  1. 2

                    Yeah, the mods have made it very clear that RMS discussions are currently not welcome on the site.

                    1. 8

                      Yes, it happened with this story that links from different days were merged. As with the merge before it. As with the merge before it. As with the merge before it. As with the merge before it. As with the merge before it. As with… I mean, you take the point, but as with the first merge.

                      This is not mods sneakily putting a thumb on the scale to influence a discussion. This is the way merging has worked, as reiterated the day before this story was posted, and as with 106 of the other 192 merged stories. Merging news has always been central to the design of the feature.

                      Users are not especially divided on this and mod action ran in the opposite direction of what you imagine. This was flagged 35 times as being off-topic and once as spam, making it the most-flagged story in Lobsters history, and I chose to leave it up anyways. The FSF has been transformational for our profession and I thought maybe it would go OK.

                      There’s also a proper meta thread I’ll add some more thoughts to when I have time away from work, just wanted to leave a note here because you’ve repeated this conspiracy theory many times on-site and in chat this last week, so clearly it’s important to you.

                      1. 3

                        I thought that the merge feature was originally for articles on the same news story, reporting very similar content, not for “everything related to a topic must go into this one place”, but I guess I was wrong. Fact is that the story I commented on here was relatively positively received, while the original post that everything was merged into wasn’t. And the content within them is not at all similar. Which means merging effectively disappears the positively received thread from the front page, introduces ridiculous and unnecessary confusion, and reduces the chances of anyone stumbling onto the topic on lobste.rs from reading the positively received and upvoted post and its thread. I would like it if, from now on, all posts about Github were merged into a weekly megathread to match the current rule; it’s obviously what the users want.

                        1. 2

                          Eh, I don’t think you’re doing anything all that conspiratorial (or you’d be in a conspiracy of one, if so), just that I think you got tired of seeing people argue about pedophilia and dismayed to see a lot of downvotes on a story, so figured that arguing from prior jurisprudence would be appropriate for this case. I disagree. This case was different than prior ones you’re citing. I think that mergedeleting erased a pretty big story to the point that a lot of people didn’t even see it and confused a lot of different threads into one, effectively silencing the discussion.

                          I think this was big enough news with different enough development that it warranted a few separate threads. I didn’t see newer threads getting downvoted into oblivion nor more pedophilia flamethreads popping up, but doubtlessly you can see things I can’t.

                          1. 1

                            yeah to be clear I don’t think it’s like a conspiracy either lol, I just disagree with the decision

                        2. 7

                          mods have made it very clear

                          That’s a weird way of referring to “users of lobste.rs”.

                          The original post, the ur-text of this entire kerfuffle, was equally upvoted and flagged. The only mod action after that was, as per policy, to fold further update posts into that original one.

                          I didn’t flag the original post as off-topic, but it was a borderline decision for me. I did hide it though. I personally don’t think the topic of “tech people behaving badly” is on-topic for lobste.rs.

                          1. 6

                            The users are divided on this.

                            If there was user unanimity, there wouldn’t be need for further action. The Lobsters software would automatically bury a story that had nothing but downvotes.

                            Mergedeleting story after story is a clear moderator policy, not a reflection of user consensus.

                          2. 4

                            They do it on many topics where they’re all about the same thing. It’s happened to one of mine before, too. There were actually several times more articles on the RMS situation than that one IIRC.

                            1. 4

                              Yeah they should just drop the pretense along with the “culture” tag.

                            2. -2

                              boom

                              1. -2

                                lol

                            1. 3

                              Most of these arguments are either incorrect or taste-based. And coming from a competing group, it’s not surprising that a lot of it is negative.

                              RISC-V has extensive academic study and backup. It’s not just a flash in the pan. It’s a well designed, extensively studied architecture. While no architecture is perfect it’s probably the most carefully academically considered architecture of all time.

                              1. 16

                                extensive academic study and backup

                                After having dealt with Scala, this claim has pretty much lost all its meaning to me.

                                Is there any evidence that they are actually looking at the facts and prior art?

                                1. 2

                                  A quick search shows hundreds of publications related to RISC-V:

                                  https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=risc-v&btnG=

                                2. 11

                                  RISC-V has extensive academic study and backup. It’s not just a flash in the pan. It’s a well designed, extensively studied architecture. While no architecture is perfect it’s probably the most carefully academically considered architecture of all time.

                                  Could you give some examples? I don’t design hardware in any capacity, and most of my direct experience writing assembler is with older CPUs (e.g. 680x0 or classic MIPS and ARM), so I can’t judge a lot of the opcode comments. But…the complaints I do understand seem entirely legitimate to me. x86 has a lot of flaws, but honestly has a very good atomics story, and I remember other CPUs (such as PowerPC) being comparatively worse. The minimalism bit it opens up with directly speaks to me; I remember how ARM was a lot more pleasant than MIPS specifically because MIPS was overly minimalistic, so it’d take a surprising number of instructions to get stuff done.

                                  I’m not saying the author isn’t biased, or even that he’s right, but the parts I understand don’t seem incorrect or taste-based.

                                  1. 1

                                    This guy picks off a few of the more obvious points:

                                    https://www.reddit.com/r/programming/comments/cixatj/_/ev9z9fp

                                  2. 9

                                    RISC-V has extensive academic study and backup

                                    To rephrase this a bit: RISC-V provides plenty of canvas on which lots of postdocs can paint their careers. Doesn’t mean it’s practically sound.

                                    Funnily, some of the decisions I despise the most about RISC-V (its introduction of the machine and hypervisor modes) are all about imitating (some of the worse aspects of) x86 (that is, “practical” decisions).

                                    1. 1

                                      The hypervisor mode of x86 is a complete mess - which perhaps accounts for its popularity.

                                  1. 13

                                    Standard Forth is far more portable than standard C

                                    1. 1

                                      “Forth is far more portable”? That’s a big claim and I think it’s easily refuted. For instance Forth can’t target really tiny microcontrollers, but C can and regularly does. However I can’t think of any platform Forth can target that C can’t.

                                        For example you can’t run Forth on a PIC16F84A microcontroller - which has 68 bytes of RAM and 1.75KB of flash. C can. No Forth exists which works on a device this small (although there is one which can target the larger PIC16C series and the much larger PIC18+s).

                                      While it’s pretty old now the PIC16F84A was an incredibly popular microcontroller with many millions of them being sold over the years, so it’s not exactly an obscure device.

                                      1. 1

                                        I did regret adding “far” in what I said but for ANS core in general I think it holds.

                                        Sure you can’t run “a Forth” on the PIC, but you don’t really run “a ‘C’” on the PIC, you cross-compile for the PIC. You can write Forth-to-assembler cross-compilers relatively simply (see: avr, cmForth), I don’t see why writing one for the PIC16F84A would be harder than the equivalent C compiler… (in fact, there appears to be one for the PIC12/16 family PicoForth)

                                    1. 5

                                      Linux as a whole still has a hard dependency on GCC, though, right? Does anyone have any insight on what the status of getting it building with clang is? This is the last I saw.

                                      1. 2

                                        One of the reasons to get rid of VLAs is that one of the types of VLA - VLAs in structures - is a gcc language extension not supported by clang.
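
                                          For reference, the construct looks like this (a minimal sketch; GCC accepts it as an extension, Clang rejects it):

                                              void demo(int n) {
                                                  struct {
                                                      int len;
                                                      int data[n];   /* VLA as a struct member: GCC extension */
                                                  } s;
                                                  s.len = n;
                                                  (void)s;
                                              }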

                                      1. 29

                                        I have friends who work for Red Hat who are Not Happy about this.

                                        My speculation is that clients of Red Hat will see at most slow change. IBM’s not going to toss the cash cow RHEL, and the various cloud software offerings are what they apparently bought it for. However, internally I think we’ll see a massive diaspora of talent as Red Hat becomes IBM-ified. (All claims to the contrary from either company’s PR are of course to be ignored completely. They have to say that, to stave off the employee flight as long as possible.)

                                        Hot take: I wonder what this will mean for SystemD? ;)

                                        1. 8

                                          I’m unfamiliar with IBM’s Linux strategy; why would this mean anything wrt systemd specifically?

                                          1. 5

                                            Nothing, it’s just a play on the (IMHO very wrong) meme that systemd is only as successful as it is because it had RedHat backing.

                                            IBM probably doesn’t even know what systemd is on the “we’re buying a huge company for 20 billion” plane.

                                          2. 6

                                              Employees are rarely excited about being acquired, and let’s face it, history has shown that it’s been bad for both customers and employees unless the company being acquired is going out of business.

                                            1. 12

                                              Hot take: I wonder what this will mean for SystemD?

                                              Can it be a hot take if it’s not even a take? This is inquisitive (not argumentative), which is good for discussion but probably bad if your goal was to have an opinion.

                                              1. 3

                                                hot question

                                              2. 5

                                                I’m out of the loop. Could you explain the systemd comment?

                                                1. 14

                                                  systemd was originally written by Lennart Poettering and Kay Sievers who work at Red Hat.

                                                  1. 3

                                                    Is it still maintained by them as part of their jobs at Red Hat?

                                                    1. 5

                                                      Yes

                                                      1. 3

                                                        Lennart Poettering on Twitter this morning (:

                                                        As you all know we never have been fans of portability. It will come at no surprise that in light of the recent developments we will discontinue all non-S/390 ports of systemd very soon now. Please make sure to upgrade to an S/390 system soon. Thank you for understanding.

                                                        1. 1

                                                          Even POWER? ;)

                                                2. 3

                                                  Hot take: I wonder what this will mean for SystemD? ;)

                                                  I’m pretty sure Facebook will keep developing it if nobody else does:

                                                  https://media.ccc.de/v/ASG2018-192-state_of_systemd_facebook

                                                  (disclaimer: I work there, though not on the team that works most with systemd – and this is of course my personal opinion)

                                                1. 50

                                                  I found the article interesting; having presented Go as both horrible and good, it reminds me of C a bit: a “quirky, flawed and enormous success” language. Perhaps it’s no coincidence given the fact that they share some of their designers :)

                                                  However, as someone who wrote some code in both Go and Rust, I couldn’t disagree more with “I think the reasons why Go’s popularity skyrocketed, will apply to Rust.” I think you’re missing one, very important bit: Go is easy to write. It may be stupid, it may be flawed, but you write your code quickly and it gets the job done. Go has succeeded in attracting Python programmers, because it also allowed them to build their programs quickly, with not much effort, and they ran quite a lot faster.

                                                  The barrier of entry to Rust is massive. Yes, there are obvious advantages to code that you’ve already written in it and made compile, but as far as development effort goes, Rust is not the kind of thing you choose if you want a thing done quickly.

                                                  I think Go’s success is more similar to Javascript’s or Python’s rather than Rust’s. It’s easy to pick up and good enough in practice. Rust goes for the opposite: it makes itself harder to learn and use, but for a superior long-term benefit. I don’t think it’ll reach quite the same audience, or popularity.

                                                  1. 19

                                                    +1. I feel like the reasoning in the article is a bit skewed: it considers programming languages to be formal artifacts and compares them on their technical merits. It is a perfectly valid thing to do and the analysis in the article is thoughtful.

                                                    Then it starts making predictions based on the assumption that technical merits of a language define its success, completely missing the wetware side of programming languages. Programming languages are made to be used by both humans and computers, and their human effects can be very subtle: even some stupid little thing like long compilation times or quirky syntax can be disruptive.

                                                    Go is good enough at the machine level (much better than Python/Ruby and the like), at the same time cutting many corners to be easy for humans (simple, minimal and familiar syntax, small number of concepts in the language, simple and unobtrusive type system, low-latency GC, good tooling, very fast compilation times and feedback loop, a simple but effective concurrency model, large and actually useful standard library). Sometimes Go feels almost like cheating: it is full of high-quality implementations of complex things with very simple/minimal/hidden human interfaces (GC, goroutines, the standard library). Go consistently makes it harder for humans to make wrong choices, compared to most other mainstream programming languages (one subtle example: structures are value-types which are copied by default, unlike pass-by-reference craze of Java/Python/Ruby, making unintended sharing harder and even alleviating absence of immutability to some degree).

                                                    Rust is excellent for machines, but its human side is much more uneven than in Go. It is much better than Go in preventing humans from making mistakes in many areas. At the same time, it brings non-trivial, large, open-ended interfaces and does not hide implementation complexity as well from the programmer. It brings huge learning curve and cognitive overhead. Implementation/language complexity can be a minefield in itself: humans might get confused, might miss a simpler way to do something, etc. Rust is designed for very patient and conscientious programmers who are willing to spend time and efforts to get things right. Sadly, this is often not the recipe for success in many parts of the software industry.

                                                    I’d be happy to see a world where Go fills a high-level niche and Rust makes systems foundation.

                                                    1. 5

                                                      I think the trouble with discussions about a language’s “technical merits” is that somewhere along the way some people have lost sight of the purpose of programming languages: to act as an interface to make it easy for programmers to create software. Good languages remove resistance to getting programs written. Bad languages make it harder.

                                                      Go is very good at satisfying a particular niche - making it easy to write software without sacrificing much performance. I’d argue that this is a niche which is in high demand and that explains the popularity of Go.

                                                        Rust has a different niche - minimising memory access errors while providing sophisticated language features and having good performance. The trade-off is that the language is much harder to master than Go and programming is in general more difficult. Rust’s features are all laudable things but given its lower popularity it seems like there’s just less demand for languages of this type.

                                                  1. 10

                                                    That means these people must be living on an oasis with their fast compiles, enhanced safety, and nice GUI. All using an extension of a language from the 1970’s.

                                                      Outside ALGOL-like languages, there are two others built on tech ranging from the 1950’s-1980’s whose productivity runs circles around C, as shown in the second link. Corroborates every other study I’ve ever seen, too. The language family in the first link can use C libraries, compile to C, extend C with their macros with generation of legacy C, easily add optimization/security since they’re ideal for compiler writing, and so on. They give you all C’s benefits easily but doing it the other way around is hard. And let’s not forget the vulnerabilities that come with C code most of the time: vulnerability is something C coders often embrace even when it’s unnecessary.

                                                    Embracing vulnerability by default on a desert island means their island will perish due to preventable disaster while others continue on getting through their smaller struggles. Given the amount of legacy and new C, maybe this desert island analogy isn’t fitting well with reality. ;)

                                                    1. 8

                                                        It’s astonishing how Pascal is downplayed in these blog posts, sometimes with a reference to “why Pascal is not my favourite language”. These blog posts typically compare today’s C to 1970s Pascal. Not a fair comparison (evident when the article makes the point that Pascal does not have function pointers; Turbo Pascal/Free Pascal do have function pointers - I used to write WinAPI software with Free Pascal).

                                                        The blog article’s talk of the simplicity of C, or of writing a C compiler, is absurdly wrong. As an evolved language it is complex.

                                                        You mention Common Lisp, Smalltalk and Pascal as alternatives. I would also like to mention Oberon, which has lately caught my interest. The Oberon language is a descendant of the original Pascal (not Borland style) and has a simple grammar with under 40 production rules. The Oberon system is a very interesting project.

                                                        Anyway, most of these blog posts fall silent on programming language dynamics with regard to platform effects. That Unix and BSD are C-based has paved the way for C.

                                                      1. 2

                                                        Exactly. They’re false comparisons. If you like Oberon, check out the later language Component Pascal with Blackbox and Astrobe Oberon for embedded.

                                                          Edit: @emys I added links to them since I’m on lunch now.

                                                        1. 2

                                                          My current pet project is writing a component pascal compiler actually 😉

                                                          1. 0

                                                            Ain’t that a nice surprise! Although I favored Modula-3, Component Pascal was another that had a nice simplicity vs power tradeoff. What made you choose Component Pascal specifically? Also, when I first searched for it, I found a lot of Russian results. Looked like it was popular over there at least at that time. Have you noticed any such patterns on forums or mailing lists of its few adopters?

                                                              While we’re at it, I’ll throw out an idea that you may or may not be into for that project. The strength of C’s ecosystem and legacy effects mean plugging into it seamlessly with max compatibility is the best way to pull more people out of it incrementally. I’ve been suggesting that, at a minimum, new languages keep its data types and calling conventions. Then, make the FFI pull it in as automatically as a C program can. Then, the ability to export functions to be called by C programs the normal way. You get maximum, high-speed compatibility with C libraries that way.

                                                            What would you think of a component Pascal that builds on C compatibility at that low-level but otherwise goes with Wirth tradition of safety, simplicity, and fast compiles?

                                                            1. 2

                                                                I learned programming with Turbo Pascal books from our local library (I used Free Pascal quite early I think, back when it had real). I had tinkered with QBasic, C and C++ before, but I wasn’t even a teenager then and learning to program was already a language barrier (native German; when we learned the word if in our English class at school I already knew it from programming books). With Pascal it clicked: I could understand the error messages, structure my code, etc. So I guess I’ll always have a soft spot for Wirthian languages.

                                                                For some time the wish has been growing in me to write a compiler or interpreter. In the beginning I contemplated a Lisp or a Smalltalk. Then I decided I didn’t want to start by writing my own virtual machine and my focus shifted a bit towards compiled languages. I remembered Pascal and wanted to read up on various Pascal dialects, when I stumbled over the Wikipedia articles, read Modula -> Oberon and stumbled over Component Pascal, which seemed like a nice idea because it has a concise spec that is available online (Oberon and Modula I think are rather defined in published books that are long out of print). Also I am curious what programming with Oberon feels like; especially the reflection feature and the With-statement seem like a nice low-level substitute for Rust or Haskell style union data types.

                                                                This is not my first try at writing a compiler or an interpreter; before, I tinkered with Haskell and parser combinators, etc., always getting lost along the way. This time I want to go the classical route and hand-roll a lexer and a recursive descent parser to train my brain to think in terms of production rules. Wirthian languages seem especially good for that. Not yet decided how I will generate code; probably I’ll use LLVM, but hand-rolling amd64 code or maybe RISC-V is not off the table. Probably I will make a decision about this when I have an AST. Let’s see how I advance given that I don’t have too much time for side projects currently.

                                                              What would you think of a component Pascal that builds on C compatibility at that low-level but otherwise goes with Wirth tradition of safety, simplicity, and fast compiles?

                                                              Very much my idea of how it should work. imho Pascal was quite similar in that respect (except for the calling convention).

                                                              You mention you like Modula-3 more than the Oberon family. What are your reasons for this? One thing I am missing with Oberon is that there is no concurrency story.

                                                              1. 1

                                                                Thanks for the story. Fascinating! I started with QBasic, too.

                                                                  Far as Modula-3, check out its Wikipedia page. Languages like C were small, unsafe, fast, and harder to do large programs in. The safer languages for large programs like C++ and Java were enormously complex. Slow abstractions for most, too.

                                                                  Modula-3 gives you programming in the large, concurrency, a partly-verified stdlib, a still smallish language, and still fast compiles (esp vs C++). I think it just struck a great balance among many attributes that conflicted. Main thing it’s missing is Scheme-like macros. A must-have for Modula-4.

                                                        2. 3

                                                          …these people must be living on an oasis with their fast compiles, enhanced safety, and nice GUI.

                                                          C also has very fast compiles and these days people have a range of good GUIs available. Safety… not so much.

                                                          …there’s two others built on tech ranging from 1950’s-1980’s whose productivity runs circles around C…

                                                          Notably productivity isn’t mentioned in the article at all. C’s never made great claims to productivity as an application programming language but in my experience it’s not terrible either. lisp and smalltalk may make these claims but they’re also terrible resource hogs compared to C. C’s niche is tight control over the machine and its resources. Each language has its place.

                                                          1. 2

                                                            “C also has very fast compiles and these days people have a range of good GUIs available”

                                                              Like Turbo Pascal: optimized versions compiled at what seems today an insane speed on pre-Pentium devices. Does C compile that fast on a 100-200MHz processor or even today’s devices single-threaded? An entire project done before your finger changes places? I used to do that in a BASIC, too. Hit a button, one second passes, and my next iteration is loaded with everything still fully in my head on a P3 400MHz w/ 128MB RAM.

                                                            “lisp and smalltalk may make these claims but they’re also terrible resource hogs compared to C. “

                                                              Maybe terrible LISPs. I’ve seen benchmarks where many are quite fast. There have also been soft real-time deployments of them. Chez Scheme’s first version started on a Z80, per this account. Most C coders probably would crash that with their bloated apps. Then, there’s an idea I had which I fortunately found had already been done in ZL: C and C++ integrated with Scheme to get the benefits of both. So, your stated LISP drawbacks and C advantages are nullified by the fact that LISP can do what C can with LISP’s benefits and export the result to C like PreScheme did. Doing that in reverse? Much harder and messier.

                                                            “C’s niche is tight control over the machine and its resources.”

                                                            Quite a few languages can do that. PL/S, Ada, Amiga-E, the safer C’s (eg Cyclone), D, SPARK, Rust, Nim, ZL, ATS, Ivory, Myrddin, and Zig come to mind. Most don’t have the drawbacks of C in average case. Some are more flexible, predictable, optimizable, or safer than C in the low-level cases. Seems like C is most popular and deployed but not strongest in that niche along a number of dimensions.

                                                            1. 3

                                                              Does C compile that fast on a 100-200Mhz processor or even today’s devices single-threaded?

                                                              Turbo Pascal’s great. In the end compilation speed’s an implementation detail and there are plenty of fast C compilers out there. ZapCC comes to mind.

                                                              Most C coders probably would crash that with their bloated apps.

                                                              Ok, that’s just your biases showing. There’s no valid reason to believe that C programmers are inherently creating “bloated apps” when lisp/smalltalk programmers aren’t.

                                                              Correct me if I’m wrong but I believe lisp and smalltalk use garbage collection which makes them inherently less resource efficient than C. Again, that’s not necessarily a bad thing - each language feature has advantages and disadvantages - but memory efficiency isn’t as good for languages which have GC because collection inherently always lags last use to some extent. In C you don’t incur runtime costs for memory lifetime management so on that point alone C is more resource efficient. And there are plenty of other areas where these more abstract languages incur significant overhead where C doesn’t.

                                                              Show me a lisp/smalltalk which can create programs for a PIC16F84 microcontroller. That MCU has only 68 bytes of RAM and 1.75KB of program memory. That’s not 68 KB or 68 MB of RAM. It’s 68 bytes. It’s no big deal to write C programs for such a resource constrained environment. lisp/smalltalk are completely out of the question though.

                                                              At this time you also can’t write a program in PL/S, Ada, Amiga-E, D, SPARK, Rust, Nim, ZL, ATS, Ivory, Myrddin, or Zig for the PIC16F84 for a few reasons but not least because no compilers exist for these languages which can target such resource constrained MCUs. I only quote this MCU because it’s one of the smaller devices I’ve programmed. The same efficient resource handling applies for any other target of course. C remains unparalleled for its precise handling of resources in real world applications.

                                                              Side note: you could probably write a FORTH which would run on the PIC16F84 but that’s about the only other language which could target it, I think. And it’d be a lot slower than C on this underpowered CPU.

                                                        1. 4

                                                          No. Because C is not how your computer works. It’s not how anybody’s computer works, nor has it been in about 3 or 4 decades.

                                                          C is how a PDP-11 works and bears about as much resemblance to an x86-64 chip as it does to ENIAC.

                                                          https://queue.acm.org/detail.cfm?id=3212479

                                                          1. 14

                                                            I don’t think that’s really true.

                                                              The assertion that “C is how a PDP-11 works and bears about as much resemblance to an x86-64 chip as it does to ENIAC” doesn’t really stand up to scrutiny. C is fairly good at providing a window into any machine with a flat memory space and a conventional call stack - i.e. segmented memory models aren’t a good fit and machines without stacks aren’t a good fit. x86-64 chips are a good match for this model and C is no better or worse at programming them than it was at programming the PDP-11 it was originally written for.

                                                            1. 6

                                                              C is how a PDP-11 works and bears about as much resemblance to an x86-64 chip as it does to ENIAC.

                                                              C is within epsilon of x86-64 assembly in its distance from actual x86-64 implementations. The differences that this article talks about are largely present in x86-64 assembly too.

                                                              1. 4

                                                                I’d say this is because “how modern computers work is shaped by how C works”.

                                                            1. 10

                                                              I’m a bit puzzled why the author seems to think that integer wrap on overflow behaviour has something to do with C and undefined behaviour. The same thing happens with nearly all languages which use the processor’s integer arithmetic, because those semantics are provided by the processor itself. Java, C#, etc. all wrap on overflow. There are some exceptions though - Ada provides the “exception on overflow” semantics the author prefers, but it does come with a significant performance penalty because checking for overflow requires additional instructions after every arithmetic operation.
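
                                                              To illustrate the cost, here's a rough C sketch of what per-operation checking amounts to, using the GCC/Clang __builtin_add_overflow builtin (the checked_add wrapper is just an illustrative name):

                                                              ```c
                                                              #include <limits.h>
                                                              #include <stdbool.h>
                                                              #include <stdio.h>

                                                              /* Add with an explicit overflow check - roughly the extra test and
                                                                 branch that "exception on overflow" semantics imply for every
                                                                 arithmetic operation. */
                                                              static bool checked_add(int a, int b, int *out) {
                                                                  return !__builtin_add_overflow(a, b, out);  /* true on success */
                                                              }

                                                              int main(void) {
                                                                  int sum;
                                                                  if (checked_add(INT_MAX, 1, &sum))
                                                                      printf("sum = %d\n", sum);
                                                                  else
                                                                      printf("overflow detected\n");  /* this branch is the overhead */
                                                                  return 0;
                                                              }
                                                              ```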

                                                              The point here is that if you want performant arithmetic it's all about what the processor is designed to do, not anything to do with the languages. Java defines integer wrap as the language's standard behaviour, but as a result it incurs a performance penalty for integer arithmetic on processors which don't behave this way. C doesn't incur this penalty because it basically accepts that overflow works however the processor implements it. And let's face it, if your program relies on the exact semantics of overflowing numbers you're probably doing it wrong anyway.

                                                              There are some processors which provide interrupts on integer overflow. This eliminates the performance penalty associated with overflow checks if your language is Ada and so you want to trap on overflow. There are other semantics around too - DSP processors often have "clamp on overflow" (saturation) instead, since that suits the use case better, and old Unisys computers use one's complement rather than two's complement, so their overflow behaves slightly differently.

                                                              1. 4

                                                                The performance penalty of "trap on overflow" can be reduced by clever modeling, for example by allowing a delayed trap instead of an immediate trap. As-if Infinitely Ranged (AIR) is one such model. An immediate trap disallows optimizing a+b-b to a, because if a+b overflows the former traps and the latter doesn't. A delayed trap allows such optimization.
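
                                                                A sketch of the difference, assuming a hypothetical trap-on-overflow dialect of C:

                                                                ```c
                                                                /* Consider this function under trap-on-overflow semantics: */
                                                                int f(int a, int b) {
                                                                    return (a + b) - b;
                                                                }

                                                                /* Immediate trap: the compiler must keep the a + b computation,
                                                                   because f(INT_MAX, 1) is required to trap at that exact point.

                                                                   Delayed trap (As-if Infinitely Ranged): the intermediate overflow
                                                                   never reaches an observable result, so the compiler is free to
                                                                   emit the equivalent of:

                                                                   int f(int a, int b) { return a; }
                                                                */
                                                                ```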

                                                                1. 3

                                                                  I’m a bit puzzled why the author seems to think that integer wrap on overflow behaviour has something to do with C and undefined behaviour.

                                                                  You are mixing up the underlying behaviour of the processor with the defined (or undefined) behaviour of the language. Wrap on integer overflow is indeed the natural behaviour of most common processors, but C doesn't specify it. The post is saying that some people have argued that wrap-on-overflow should be the defined behaviour of the C language, or at least the implementation-defined behaviour implemented by compilers, and then goes on to provide arguments against that. There is a clear example in the post of where the behaviour of a C program doesn't match that of 2's complement arithmetic (wrapping).
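
                                                                  A classic illustration of the mismatch (not necessarily the exact example from the post): because signed overflow is undefined, an optimizer may assume it never happens, so the compiled behaviour can diverge from 2's complement wrapping:

                                                                  ```c
                                                                  /* Under 2's complement wrapping this returns 0 when x == INT_MAX,
                                                                     since INT_MAX + 1 wraps to INT_MIN. But signed overflow is
                                                                     undefined in C, so the optimizer may assume x + 1 cannot wrap
                                                                     and compile the whole function to "return 1". */
                                                                  int always_greater(int x) {
                                                                      return x + 1 > x;
                                                                  }
                                                                  ```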

                                                                  The same thing happens with nearly all languages which use the processor’s integer arithmetic, because those semantics are provided by the processor itself.

                                                                  That’s the point - in C, it doesn’t happen.

                                                                  1. 1

                                                                    I don't get the point. The advantage of using integer wrap for C on processors that implement integer wrap is that it is high performance, simplifies compilation, has clear semantics, and is the semantics programmers expect. If you want to argue that it should be e.g. trap on overflow, you need to provide a reason more substantive than theoretical compiler optimizations that are shown by hand waving. The argument that it should be "generate code that overflows but pretend you don't" needs an even stronger justification because the resulting semantics are muddy as hell. I'm actually in favor of a debug mode overflow trap for C but an optimized mode that uses processor semantics.

                                                                    1. 1

                                                                      you need to provide a reason more substantive than theoretical compiler optimizations that are shown by hand waving

                                                                      Read the post, then; there are substantive reasons in it. I’m not engaging with you if you’re going to start by misrepresenting reasoned arguments as “hand waving”.

                                                                      1. 0

                                                                        “However, while in many cases there is no benefit for C, the code generation engines and optimisers in compilers are commonly general and could be used for other languages where the same might not be so generally true; “

                                                                        Ok! You think that’s a substantive argument.

                                                                        1. 1

                                                                          You’re making a straw man. What you quoted is part of a much larger post.

                                                                          1. 1

                                                                            That’s not what “straw man” means.

                                                                            1. 1

                                                                              It means that you're misrepresenting the argument, which you are. I said that the post contained substantive reasons; you picked a particular part and insinuated that I had claimed that that particular part on its own constituted a substantive reason, which I didn't. And: you said "If you want to argue that it should be e.g. trap on overflow, you need to provide a reason more substantive than theoretical compiler optimizations that are shown by hand waving", but optimisations have very little to do with trapping being a better behaviour than wrapping, and I never claimed they did, other than to the limited extent that trapping potentially allows some optimisations which wrapping does not. But that was not the only reason given for trapping being a preferable behaviour; again, you misrepresented the argument.

                                                                  2. 2

                                                                    I’m a bit puzzled why the author seems to think that integer wrap on overflow behaviour has something to do with C and undefined behaviour.

                                                                    They are related, yes. E.g. whilst signed integer overflow is well defined in most individual hardware architectures (usually as a two's complement wrap), it could vary between architectures, and thus C leaves signed integer overflow undefined.

                                                                    1. 1

                                                                      The whole argument is odd.

                                                                    1. 30

                                                                      I enjoyed the author’s previous series of articles on C++, but I found this one pretty vacuous. I think my only advice to readers of this article would be to make up your own mind about which languages to learn and use, or find some other source to help you make up your mind. You very well might wind up agreeing with the OP:

                                                                      Programmers spend a lot of time fighting the borrow checker and other language rules in order to placate the compiler that their code really is safe.

                                                                      But it is not true for a lot of people writing Rust, myself included. Don’t take the above as a fact that must be true. Cognitive overheads come in many shapes and sizes, and not all of them are equal for all people.

                                                                      A better version of this article might have gone out and collected evidence, such as examples of actual work done or experience reports or a real comparison of something. It would have been a lot more work, but it wouldn't have been vacuous and might have actually helped someone answer the question posed by the OP.

                                                                      Both Go and Rust decided to special case their map implementations.

                                                                      Rust did not special case its “map implementation.” Rust, the language, doesn’t have a map.

                                                                      1. 16

                                                                        Hi burntsushi - sorry you did not like it. I spent months before this article asking Rust developers about their experiences, concentrating on people actually shipping code. I found a lot of frustration among the production programmers, less so among the people who enjoy challenging puzzles; they mostly like the constraints and in fact find it rewarding to fit their code within them. I did not write this sentence without making sure it at least reflected the experience of a lot of people.

                                                                        1. 20

                                                                          I would expect an article on the experience reports of production users to have quite a bit of nuance, but your article is mostly written in a binary style without much room for nuance at all. This does not reflect my understanding of reality at all—not just with Rust but with anything. So it’s kind of hard for me to trust that your characterizations are actually useful.

                                                                          I realize we’re probably at an impasse here and there’s nothing to be done. Personally, I think the style of article you were trying to write is incredibly hard to do so successfully. But there are some pretty glaring errors here, of which lack of nuance and actual evidence are the biggest ones. There’s a lot of certainty expressed in this article on your behalf, which makes me extremely skeptical by nature.

                                                                          (FWIW, I like Rust. I ship Rust code in production, at both my job and in open source. And I am not a huge fan of puzzles, much to the frustration of my wife, who loves them.)

                                                                          1. 4

                                                                            I just wanted to say I thought your article was excellent and well reasoned. A lot of people here seem to find your points controversial but as someone who programs C++ for food, Go for fun and Rust out of interest I thought your assessment was fair.

                                                                            Lobsters (and Hacker News) seem to be very favourable to Rust at the moment and that’s fine. Rust has a lot to offer. However my experience has been similar to yours: the Rust community can sometimes be tiresome and Rust itself can involve a lot of “wrestling with the compiler” as Jonathan Turner himself said. Rust also provides some amazing memory safety features which I think are a great contribution so there are pluses and minuses.

                                                                            Language design is all about trade-offs and I think it’s up to us all to decide what we value in a language. The “one language fits all” evangelists seem to be ignoring that every language has strong points and weak points. There’s no one true language and there never can be since each of the hundreds of language design decisions involved in designing a language sacrifices one benefit in favour of another. It’s all about the trade-offs, and that’s why each language has its place in the world.

                                                                            1. 10

                                                                              I found the article unreasonable because I disagree on two facts: that you can write safe C (and C++), and that you can’t write Rust with fun. Interpreted reasonably (so for example, excluding formally verified C in seL4, etc.), it seems to me people are demonstrably incapable of writing safe C (and C++), and people are demonstrably capable of writing Rust with fun. I am curious about your opinion of these two statements.

                                                                              1. 8

                                                                                I think you’re making a straw man argument here: he never said you can’t have fun with Rust. By changing his statement into an absolute you’ve changed the meaning. What he said was “Rust is not a particularly fun language to use (unless you like puzzles).” That’s obviously a subjective statement of his personal experience so it’s not something you can falsify. And he did say up front “I am very biased towards C++” so it’s not like he was pretending to be impartial or express anything other than his opinion here.

                                                                                Your other point, "people are demonstrably incapable of writing safe C", is similarly plagued by absolute phrasing. People have demonstrably used unsafe constructs in Rust and created memory safety bugs, so if we're living in a world of such absolute statements then you'd have to admit that the exact same statement applies to Rust.

                                                                                A much more moderate reality is that Rust helps somewhat with one particular class of bugs - which is great. It doesn’t entirely fix the problem because unsafe access is still needed for some things. C++ from C++11 onwards also solves quite a lot (but not all) of the same memory safety issues as long as you choose to avoid the unsafe constructs, just like in Rust.

                                                                                An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”… And that’s pretty much what any decent C++ shop is doing these days.

                                                                                1. 5

                                                                                  somewhat with one particular class of bugs

                                                                                  It helps with several types of bugs that often lead to crashes or code injections in C. We call the collective result of addressing them "memory safety." The extra ability to prevent classes of temporal errors (easy-to-create, hard-to-find errors in other languages) without a GC was a major development. Saying "one class" makes it seem like Rust is knocking out one type of bug instead of piles of them that regularly hit C programs written by experienced coders.

                                                                                  An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”

                                                                                  Maybe. I'm not familiar enough with C++17 to know. I know C++ was built on top of an unsafe language, with Rust designed ground-up to be as safe as possible by default. I caution people to look very carefully for ways to do C++17 unsafely before thinking it's equivalent to what safe Rust is doing.

                                                                          2. 13

                                                                            I agree wholeheartedly. Not sure who the target survey group was for Rust but I’d be interested to better understand the questions posed.

                                                                            Having written a pretty large amount of Rust that now runs in production on some pretty big systems, I don’t find I’m “fighting” the compiler. You might fight it a bit at the beginning in the sense that you’re learning a new language and a new way of thinking. This is much like learning to use Haskell. It isn’t a good or bad thing, it’s simply a different thing.

                                                                            For context for the author - I've got 10 years of professional C++ experience at a large software engineering company. Unless you have a considerable amount of legacy C++ to integrate with or an esoteric platform to support, I really don't see a reason to start a new project in C++. The times Rust has saved my bacon by catching a subtle cross-thread variable sharing issue, or by enforcing strong requirements via the borrow checker, have saved me many hours of debugging.

                                                                            1. 0

                                                                              I really don’t see a reason to start a new project in C++.

                                                                              Here’s one: there’s simply not enough lines of Rust code running in production to convince me to write a big project in it right now. v1.0 was released 3 or 4 years ago; C++ in 1983 or something. I believe you when you tell me Rust solves most memory-safety issues, but there’s a lot more to a language than that. Rust has a lot to prove (and I truly hope that it will, one day).

                                                                              1. 2

                                                                                I got convinced when Rust in Firefox shipped. My use case is a Windows GUI application, and if Firefox is okay with Rust, so is my use case. I agree I too would be uncertain if I were doing, say, embedded development.

                                                                                1. 2

                                                                                  That’s fair. To flip that, there’s more than enough lines of C++ running in production and plenty I’ve had to debug that convinces me to never write another line again.

                                                                                  People have different levels of comfort for sure. I’m just done with C++.

                                                                            1. 2

                                                                              Wow. The performance of this thing is nuts. Loving it.

                                                                              1. 4

                                                                                I was interested in what he was saying just up until he said

                                                                                Some may even be lucky enough to find themselves doing Extreme Programming, also known as ‘The Scrum That Actually Works’.

                                                                                My experience with XP was that it was extremely heavyweight and did not work well at all. It created the greatest developer dissatisfaction of any of the versions of Agile I’ve encountered.

                                                                                1. 5

                                                                                  Couldn’t disagree more – the most successful team I was on was heavily into XP. When people say it’s heavyweight, they’re usually talking about pair programming. I’m not sure what people have against it; I’ve found it’s a great way to train junior developers, awesome for tricky problems, and generally a great way to avoid the problem of, “Oh this PR looks fine but redo it because you misunderstood the requirements.”

                                                                                    1. 2

                                                                                      I don't want to discount your experience, but it sounds like the issues you've had with pair programming are more about the odd choices your employer imposed.

                                                                                      Both people have specialized editor configs? Sure, switch off computers or whatever too; no need to work in an unfamiliar environment.

                                                                                      And if one person is significantly less experienced than the other, that person should be at the keyboard more often than not – watching the master at work will largely be useless.

                                                                                  1. 3

                                                                                    Why I like XP over anything else is the focus on development practices rather than business practices. Pairing, TDD, CI, <10 minute builds, WIP, whole team estimation, etc. are all used to produce a better product, faster.

                                                                                    The weekly retrospective offers a way to adjust practices that aren’t working and bolster those that are.

                                                                                    1. 2

                                                                                      Agreed 100%. It turned my head a bit when he thought Agile was too prescriptive, but then was considering an even more prescriptive methodology.

                                                                                      1. 1

                                                                                        What was your experience with XP? Also, scrum is heavyweight as well in my experience and doesn't work especially well in an actually agile environment like a startup. Feels like it could work in corp. though.

                                                                                      1. 2

                                                                                        Nice article, thanks. I liked your very rational approach to evaluating the language. Way too much programming language comparison seems more ego driven than fact driven, and it's nice to see it done cheerfully and without name calling.

                                                                                        1. 1

                                                                                          Thanks for the feedback! I’ll be sure to keep that in mind when I go off to write more posts like this

                                                                                        1. 15

                                                                                          It’s interesting to see Bjarne Stroustrup complaining that C++ is getting too complex - from its inception it’s always been one of the harder languages to fully comprehend so it seems unsurprising that it’s becoming even more intractable over time. And I say this as a C++ programmer.

                                                                                          1. 3

                                                                                            I’m always curious about the intended audience with these types of posts. The posts typically paint a straw man picture that there are people unwilling to change the operating model to be more efficient given the option, which is absurd. Should we abandon bitcoin? Is that the thesis here?

                                                                                            Clearly the non technical people would probably not know PoW is inefficient but they also have little to no control over the dominance of bitcoin and the way it works. There are strong economic incentives for actors supporting the current structure to keep supporting it as is and the blog post does not address this problem at all.

                                                                                            1. 7

                                                                                              The cryptocurrency posts themselves paint a strawman that we can't do anything better than corrupt, for-profit, centralized tech unless we switch over to blockchains. That's a lie with many counterexamples. Bitcoin itself also has huge hype and drawbacks in practice.

                                                                                              The author is highlighting that hype and drawbacks. He’s also highlighting a social phenomenon where many proponents try to talk like bad things are good things to downplay them. I’d straight up call that fraud since they’re trying to get people’s money.

                                                                                              1. 5

                                                                                                I understand that you have a strong opinion on the subject but you’re essentially calling anyone who has an interest in decentralized systems a fraudster. I think it’s disingenuous to say “people who have interests different from my own are by definition fraudsters”.

                                                                                                Decentralized, trustless systems have important applications. Bitcoin was created as a response to the banks being involved in widespread fraud. Calling Bitcoin users frauds seems to miss the point in the largest way possible.

                                                                                                1. 2

                                                                                                  “ Bitcoin was created as a response to the banks being involved in widespread fraud.”

                                                                                                  So were credit unions and non-profits in response to earlier fraud. I don’t see a lot of them involved in things like 2008 crises. I thought even Bitcoin had a non-profit/foundation controlling or supporting it.

                                                                                                  He’s also highlighting a social phenomenon where many proponents try to talk like bad things are good things to downplay them. I’d straight up call that fraud since they’re trying to get people’s money.

                                                                                                  That was the key circumstance on which I brought up fraud. The need to use as much energy as Ireland to avoid unscrupulous parties screwing up a few transactions a second is one such implication. It's a total lie since the regular banking system prevents or catches lots of stuff like that on a daily basis. From there, I pointed out in another comment that a system using regular databases and protocols with distributed checking might take a $5 VPS or one server per participant. Those don't take the energy of Ireland, insanely slow transactions, or crypto magic.

                                                                                                  That the very-smart proponents of Bitcoin don't investigate such options or tell their potential investors about such alternatives and their risk/reward tradeoffs means they're more like zealots or con artists. I mean, most people might trust such alternatives since they're using the regular financial system. They might love solutions that knock out all the real problems they've dealt with efficiently, plus make plenty of headway on the rarer or hypothetical risks many cryptocurrency proponents worry about all night.

                                                                                                  Save the best for last. If it's Bitcoin, they might also want to know it's primarily a volatile financial instrument used for speculation rather than the stable currency its proponents are selling. I know people who are day trading these things right now, riding the hype waves profitably, while the adopters driving them and sustaining the systems aren't getting the much better thing people probably promised them. Many of them have also lost money they wouldn't have lost storing currency in the traditional financial system. Looks like fraud to me.

                                                                                                  1. 3

                                                                                                    The need to use as much energy as Ireland to avoid unscrupulous parties screwing up a few transactions a second is one such implication. It's a total lie since the regular banking system prevents or catches lots of stuff like that on a daily basis. From there, I pointed out in another comment that a system using regular databases and protocols with distributed checking might take a $5 VPS or one server per participant. Those don't take the energy of Ireland, insanely slow transactions, or crypto magic.

                                                                                                    It sounds like you might endorse the notion that PayPal is more effective than Bitcoin. PayPal supports more transactions per second, catches a lot of fraud, supports chargebacks when fraud does happen, and doesn’t require proof-of-work – it all runs safely on PayPal’s verified servers. This is all true, and for many people PayPal is fine enough.

                                                                                                    However, the centralized nature of PayPal does have some problems. There’s always the risk of getting your account frozen, which has happened to countless people. Minecraft made too much money in 2010. Wikileaks pissed off powerful entities in 2012. Google has over 600,000 results for “paypal accounts frozen”. I hear that PayPal freezes lots of crowdfunding efforts in particular.

                                                                                                    What it comes down to is trust. If you can trust the corporate entity PayPal to expedite your transactions and send you on your way, then the status quo is fine. But if you have a problem with PayPal, or PayPal has a problem with you, then you need to find an alternative.

                                                                                                    You can see the same problem on a larger scale with the SWIFT network. Nearly every international interbank transfer takes place on SWIFT, and it works fine as long as everyone trusts each other. But if you find yourself on the wrong end of US sanctions, suddenly your banking system comes to a screeching halt. Russia, China, and Iran are all too aware of this problem and are trying to build alternatives. Russia is working on SPFS and China is building CIPS. They're also both stockpiling gold; another asset that won't freeze you out at a moment's notice.

                                                                                                    Bitcoin never freezes anyone out of their funds. If you have the private key, you control the bitcoin wallet, period. It’s math, not bureaucracy.

                                                                                                    1. 4

                                                                                                      “This is all true, and for many people PayPal is fine enough.” “However, the centralized nature of PayPal does have some problems”

                                                                                                      You're almost there. A centralized solution like PayPal works really well except in well-known failure modes. SWIFT is another good example I bring up myself in these discussions as better than Bitcoin so far. There are centralized companies, esp. credit unions or nonprofits, that aren't doing all the shady stuff PayPal does. That's by design. There are cooperatives leaner than SWIFT, too. So, the logical first thing to explore is how to mix those protections with centralized companies like PayPal. If we do decentralized, the first thing to explore should be proven tech for the centralized case with distributed checking, maybe at the granularity of participating organizations, like with banks and SWIFT. So, so, so much more efficient to do that.

                                                                                                      Instead, cryptocurrency proponents paint a false dilemma between for-profit, greedy banks and a distributed, energy-sucking blockchain system. It's misleading given all the designs in between. Not to mention they seem to only focus on what for-profit, scumbag banks do instead of what centralized organizations designed for public benefit can do. A little weird to sidestep the whole concept of nonprofit, consumer-focused banks or companies, eh? It's like they want a specific solution ahead of time and look for justifications for it, instead of exploring the vast solution space trying to find what works best for most peoples' goals.

                                                                                                      “Bitcoin never freezes anyone out of their funds. If you have the private key, you control the bitcoin wallet, period. It’s math, not bureaucracy.”

                                                                                                      You're telling me Bitcoin ledgers, exchanges, and/or hardware can't be blocked or made a felony in a country. I doubt that. Hell, the mining situation makes it look more like a traditional oligopoly. I can't remember if they're all in China or not. That would be even worse given it would be an oligopoly whose companies are under the control of one government that's not about libertarianism and the greater good. There's currently more diverse control and subversion difficulty in the traditional banking system, if you're not doing business with banks that are scumbags. I'd avoid any of them on the bailout list to start with.

                                                                                                      1. 2

                                                                                                        Good points all around. On second thought, what you’re describing sounds less like PayPal and more like Ripple.

                                                                                                        In May 2011, [the creators of Ripple] began developing a digital currency system in which transactions were verified by consensus among members of the network, rather than by the mining process used by bitcoin, which relies on blockchain ledgers. This new version of the Ripple system was therefore designed to eliminate bitcoin’s reliance on centralized exchanges, use less electricity than bitcoin, and perform transactions much more quickly than bitcoin.

                                                                                                        It’s targeting the interbank/SWIFT space, and purports to “do for payments what SMTP did for email”.

                                                                                                        1. 2

                                                                                                          Oh yeah, I loved their concept when I looked into this stuff. Interledger was my favorite concept but Ripple stood out, too. Obviously, I have some technical disagreements, but they're going in a much smarter direction. Their marketing said you can pay for stuff with quick settlements, multiple parties checking stuff, and none of Bitcoin's energy problems. The quick settlements in a global system are probably the main selling point for most customers.

                                                                                              2. 7

                                                                                                there are people unwilling to change the operating model to be more efficient given the option, which is absurd

                                                                                                There are absolutely lots of these people unwilling to change the operating model to be more efficient given the option. This is why I looked for claims from reasonably noteworthy bitcoiners and not random nobodies - though the random nobodies use the same arguments, and quote the noteworthy arguments - and linked and quoted them at length to make it clear that this is not straw but actual arguments they make in real life. This is all real. I’m not sure how I could make that clearer.

                                                                                                1. 1

                                                                                                  I don't see a quote about choosing PoW over efficient alternatives. The claims quoted in your post all seem to be something along the lines of "the benefits of proof of work are worth it." To these claims you respond with the argument that they are "highly questionable to anyone who isn't already a Bitcoin fan."

                                                                                                  From my read, I'd say you do not address the claim that immutability and a shared transaction consensus are useful with any sort of reasoning or argumentation, just a slew of examples meant to sow doubt in the reader's mind. You use terms like "waste" to describe the use of energy, which clearly reveals the a priori and entirely unargued assumption that it is not worth it. A better approach would be to lay down a reasonable framework for analysis and explain the limits of immutability and the price being paid for it within that framework.

                                                                                                  Ultimately, I still don’t quite understand the thesis of this post. Why should the externality of energy expenditure be regulated by the economics driving it (proponents of PoW blockchains) and not governments?

                                                                                              1. 5

                                                                                                Is this legal in Europe? In Australia, if not being tracked were legally considered a "common law right", it wouldn't be possible to opt out of it.

                                                                                                1. 7

                                                                                                  I think we need to wait and see, as GDPR will go into effect on May 25 and probably a number of practices like this one will be challenged legally. I personally feel this give-your-consent-or-so-long approach is not in the spirit of the law.

                                                                                                  1. 2

                                                                                                    If it’s not legal, they’ll make it legal and sugar-coat it with GDPR in a way that’s impractical or infeasible to the users.

                                                                                                    I hope Facebook users can combat this with addons, but as most users are mobile users, they surely lack the addons or the technical know-how to set it up.

                                                                                                    Just opt out of Facebook already.

                                                                                                    1. 10

                                                                                                      I hope Facebook users can combat this with addons

                                                                                                      At some point, the person being abused has to acknowledge that they are being abused, and choose to walk away.

                                                                                                      1. 3

                                                                                                        Yeah, just opt out. But sadly there are people who have, say, expatriated and have no better way to stay in touch with old friends.

                                                                                                        Until a viable replacement comes along, which may never happen, I think it’s a nice hope that they can find a way to concentrate on their use case without all the extra baggage.

                                                                                                        1. 14

                                                                                                          I am an expat.

                                                                                                          I manage to keep in contact with the friends that matter, the same as I did when I didn’t use Facebook in a different state in my home country.

                                                                                                          If they’re actually friends, you find a way, without having some privacy raping mega-corp using every conversation against you.

                                                                                                          1. 3

                                                                                                            Agreed, I don’t buy the argument that Facebook is the only way to keep in touch from afar.

                                                                                                            I’m an expat, and I have regular healthy contact with my friends and loved ones from another continent, sharing photos and videos and prose. I have no Facebook account.

                                                                                                      2. 2

                                                                                                        I hope Facebook users can combat this with addons

                                                                                                        Then this will happen: https://penguindreams.org/blog/discoverying-friend-list-changes-on-facebook-with-python/

                                                                                                        Unfriend Finder was sent a cease and desist order and chose not to fight it. I made my own python script that did the same thing, and ironically, Facebook's changes that fixed the Cambridge Analytica issue broke my plugin. It stopped 3rd parties, yes, but it also kept developers from having real API access to our own data.

                                                                                                        I also wrote another post about what I really think is going on with the current Facebook media attention:

                                                                                                        https://fightthefuture.org/article/facebook-politics-and-orwells-24-7-hate/

                                                                                                      3. 1

                                                                                                        You’re not forced to use Facebook. It looks like they’re following GDPR and capturing consent. It seems the biggest issue is the bundling of multiple things into one consent and not letting folks opt in or out individually.

                                                                                                      1. 1

                                                                                                        Is Fedora used much any more?

                                                                                                        1. 2

                                                                                                          Nope. Never heard of such distro.

                                                                                                          1. 2

                                                                                                            I still use it :-) Anecdotally (other users spotted "in the wild") I'd even say usage has been increasing in the last few years.