1. 73

    First, their argument for Rust (and against C) because of memory safety implies that they have not done due diligence in finding and fixing such bugs. […] And with my bc, I did my due diligence with memory safety. I fuzzed my bc and eliminated all of the bugs.

    This seems like such a short-sighted, limited view. Software is not bug-free. And ruling out a class of bugs by choice of technology as a measure of improving overall robustness won’t fix everything, but at the very least it’s a trade-off that deserves more thorough analysis than this empty dismissal.

    1. 19

      I think he is probably right (after all, he wrote it) when he says rewriting his bc in Rust would make it more buggy. I disagree this is an empty dismissal, since it is backed by his personal experience.

      For the same reason, I think the cryptography developers are probably right (after all, they wrote it) when they say rewriting their software in Rust would make it less buggy. So the author is wrong about this. He has not made a convincing argument for why he knows better than those developers.

      1. 15

        I think there’s a big difference between programs and libraries with stable requirements and those that evolve here. The bc utility is basically doing the same thing that it did 20 years ago. It has a spec defined by POSIX and a few extensions. There is little need to modify it other than to fix bugs. It occasionally gets new features, but they’re small incremental changes.

        Any decision to rewrite a project is a trade-off between the benefits of fixing the accumulated technical debt and the cost of doing and validating the rewrite. For something stable with little need of future changes, that trade-off is easy to see: the cost of the rewrite is high, the benefit is low. In terms of rewriting in a memory-safe language, there’s an additional trade-off between the cost of a memory safety vulnerability and the cost of the rewrite. The cost of Heartbleed in OpenSSL was phenomenal, significantly higher than the cost of rewriting the crypto library. In the case of bc, the cost of a memory safety bug is pretty negligible.

        Data from Microsoft’s Security Response Center and Google’s Project Zero agree that around 70-75% of vulnerabilities are caused by memory safety bugs. Choosing a language that avoids those by construction means that you can focus your attention on the remaining 25-30% of security-related bugs. The author talks about fuzzing, address sanitiser, and so on. These are great tools. They’re also completely unnecessary in a memory-safe language because they try to find classes of bugs that you cannot introduce in the first place in a memory-safe language (and they do so probabilistically, never guaranteeing that they’ve found them all).

        If you’re starting a new project, then you need a really good reason to start it in C and pay the cost of all of that fuzzing.

        1. 17

          Data from Microsoft’s Security Response Center and Google’s Project Zero agree that around 70-75% of vulnerabilities are caused by memory safety bugs. Choosing a language that avoids those by construction means that you can focus your attention on the remaining 25-30% of security-related bugs.

          There’s an implied assumption here that if a language is memory safe, those memory safety bugs will simply go away. In my experience, that is not quite true. Sometimes those memory safety bugs will turn into logic bugs.

          Not to pick on Rust here, but in Rust it is very common to put values into an array and use array indices instead of pointers when you have some kind of self-referential data structure that’s impossible to express otherwise under Rust’s move semantics. If you simply do such a naive transformation of your C algorithm, your code will be memory safe, but all your bugs (use-after-free, etc.) will still be there. You have just lifted them to logic bugs.
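
          A minimal sketch of what that lifting can look like (a made-up toy arena, not code from any real project): every index stays in bounds, so the program is memory safe in Rust’s terms, but a stale index behaves exactly like a dangling pointer at the logical level.

// Toy index-based arena: "pointers" are plain usize indices into a Vec.
struct Arena {
    slots: Vec<i32>,
    free_list: Vec<usize>, // freed slots waiting to be reused
}

impl Arena {
    fn alloc(&mut self, value: i32) -> usize {
        if let Some(i) = self.free_list.pop() {
            self.slots[i] = value; // reuse a previously freed slot
            i
        } else {
            self.slots.push(value);
            self.slots.len() - 1
        }
    }

    fn free(&mut self, i: usize) {
        self.free_list.push(i); // nothing invalidates copies of `i` held elsewhere
    }
}

fn main() {
    let mut arena = Arena { slots: Vec::new(), free_list: Vec::new() };
    let a = arena.alloc(1);
    arena.free(a);           // logically freed
    let b = arena.alloc(99); // recycles the same slot
    // Use-after-free, lifted to a logic bug: no panic, no UB, just wrong values.
    println!("a = {}, b = {}", arena.slots[a], arena.slots[b]); // prints "a = 99, b = 99"
}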

          Rust has no good abstractions to deal with this problem; there are some attempts, but they all have various practical problems.

          Other languages like ATS and F* have abstractions to help with this problem directly, as well as other problems of logical soundness.

          1. 13

            Right - but in lifting these from memory bugs to logic bugs, you get a runtime panic/abort instead of a jump to a (likely-attacker-controllable) address. That’s a very different kind of impact!

            1. 9

              You don’t get a panic if you access the “wrong” array index. The index is still a valid index for the array. Its meaning (allocated slot, free slot, etc.) is lost to the type system, though in a more advanced language it need not be. This later leads to data corruption, etc., just like in C.

              1. 7

                It leads to a much safer variant of data corruption, though. Instead of corrupting arbitrary memory as in C or C++ (like a function pointer, vtable, or return address), you are only corrupting a single variable’s value in allocated, valid, aligned memory (like a single int).

                You would get a panic in Rust for every memory corruption bug that could cause arbitrary code execution, which is what matters.
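
                A toy illustration of the two cases (not from the article, just to make the distinction concrete): a wrong-but-in-bounds index silently corrupts one typed value, while the kind of access that would corrupt unrelated memory in C hits the bounds check and panics.

fn main() {
    let mut slots = vec![10, 20, 30];

    // Wrong but in-bounds index: the logic-bug case discussed above.
    // One i32 silently gets the wrong value; no UB, no panic.
    let stale = 1;
    slots[stale] = 999;

    // Out-of-bounds index: in C this could scribble over a return address
    // or a vtable; in Rust the bounds check panics instead.
    let oob = 17;
    slots[oob] = 999; // panics: "index out of bounds"
    println!("{:?}", slots); // never reached
}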

                1. 1

                  This later leads to data corruption, etc, just like in C.

                  Can you expand on this? I had expected the behaviour in Rust to be significantly safer than C here. In C, the data corruption caused by use-after-free often allows an attacker to execute arbitrary code.

                  I totally see your point about logical corruption (including things like exposing critical secrets), but I don’t follow that all the way to “just like C”. How would an array index error be exploited in Rust to execute arbitrary code?

                  1. 11

                    I once wrote a bytecode interpreter in C++, for a garbage-collected scripting language. I implemented my own two-space garbage collector. For performance reasons, I didn’t use malloc() directly, but instead allocated a big enough byte array to host all my things. If I overflowed that array, Valgrind could see it. But if I messed up its internal structure, no dice. That heap of mine was full of indices and sizes, and I made many mistakes that caused them to be corrupted, or somehow not quite right. And I had no way to tell.

                    I solved this by writing my own custom heap analyser that examined the byte array and told me what was in there. If I saw all my “allocated” objects in order, all was well. Often, I would see something was amiss, and I could go and fix the bug. Had I written it in Rust instead, I would have had to write the exact same custom heap analyser, because Rust wouldn’t have prevented me from putting the wrong values inside my array. It’s perfectly “safe”, after all, to write gibberish in that array as long as I don’t overflow it.

                    Now could this particular bug lead to arbitrary code execution? Well, not quite. It would generate wrong results, but it would only execute what my C++/Rust program would normally execute. In this case however, I was implementing a freaking scripting language. The code an attacker could execute wasn’t quite arbitrary, but it came pretty damn close.

                    1. 6

                      The effects of data corruption depend on what the code does with the data. This often means arbitrary code execution, but not always. It’s not a property of C, it’s a property of the code. This doesn’t change when you change the implementation language.

                      Fundamentally, there is no semantic difference between a pointer in a C heap and an array index into a Rust array. In fact, some sophisticated blog authors who explain this array technique point out that the two often compile to the exact same assembly code. It’s what the code does with the data that leads to exploitation (or not).

                      Of course Rust has many additional safety advantages compared to C (buffer overflows don’t smash the stack, etc.), and using references in Rust, if you can, is safe. And when using references, there’s a great deal of correlation between Rust’s notion of memory safety and true logic safety. This is good! But many people don’t realise that this safety is predicated on the lack of aliasing. The borrow checker is only a mechanism to enforce this invariant; it’s not an operative abstraction. It’s the lack of aliasing that gets you the safety, not the borrow checker itself. When you give up that lack of aliasing, you lose a lot of what Rust can do for you. Virtually everybody understands that if they introduce unsafe pointers, they give up safety, but fewer people seem to understand that introducing aliasing via otherwise safe mechanisms has the same effect. Of course, the program continues to be memory safe in Rust terms, but you lose the strong correlation between memory safety and logic safety that you used to have.

                      Not that there’s anything wrong with this, mind you, it’s just something people need to be aware of, just as they are already aware of the tradeoffs that they make when using unsafe. It does make a projection for the number of bugs that Rust can prevent in practice more difficult, though.

                      1. 6

                        I think this is incorrect. Arbitrary code execution does not mean “can execute an arbitrary part of my program due to a logic bug”, it means “can execute arbitrary code on the host, beyond the code in my program”. Even a Rust aliasing logic bug does not open up this kind of arbitrary code execution exposure, because you can’t alias an int with a function pointer or a vtable or a return address on the stack, like you can in C or C++. You can only alias an int with an int in safe Rust, which is an order of magnitude safer and really does eliminate an entire class of vulnerabilities.

                        1. 6

                          I think this is incorrect. Arbitrary code execution does not mean “can execute an arbitrary part of my program due to a logic bug”, it means “can execute arbitrary code on the host, beyond the code in my program”.

                          In the security research world, we usually treat control of the program counter (the aptly named rip on x86-64) as “arbitrary code execution.” With return-oriented programming, you can do a surprising amount of programming using only code that’s already in the process, without sending a single byte of code of your own.

                          1. 3

                            But does Rust let you do that here? What does a snippet of Rust code look like that allows attacker-controlled indexing into an array to escalate to controlling the program counter?

                            1. 2

                              Surely you agree that “variables changing underfoot” implies “program flow becomes different from what I expect”. That’s why we use variables: to hold the Turing machine state which influences the next state. A logical use-after-free means “variables changing underfoot”. You don’t expect a free array slot’s value (perhaps now reallocated) to change based on some remote code, but it does.

                              1. 3

                                Right, but “program flow becomes different from what I expect, but it still must flow only to instruction sequences that the original program encoded” is much much safer than “program flow can be pointed at arbitrary memory, which might not even contain instructions, or might contain user-supplied data”.

                                1. 2

                                  With ROP, the program flow only goes through “instruction sequences that the original program encoded”, and yet ROP is pretty much fatal.

                                  1. 7

                                    ROP is not possible when you index an array wrong in Rust, so what is your point?

                                    1. 6

                                      And you can’t do ROP in safe Rust.

                                      1. 2

                                        Maybe not directly within the native code of the program itself, but I think (at least part of) 4ad’s point is that that’s not the only level of abstraction that matters (the memory bug vs. logic bug distinction).

                                        As an example, consider a CPU emulator written entirely in safe Rust that indexes into a u8 array to perform its emulated memory accesses. If you compile an unsafe program to whatever ISA you’re emulating and execute it on your emulator, a bad input could still lead to arbitrary code execution – it’s at the next semantic level up and not at the level of your program itself, but how much does that ultimately matter? (It’s not really terribly different than ROP – attacker-controlled inputs determining what parts of your program get executed.)
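
                                        Roughly this kind of thing, say (a made-up toy, purely to illustrate the levels involved): the host stays memory safe because every access is a bounds-checked index, but what the guest does is entirely determined by the bytes that end up in mem.

// A toy "CPU" in 100% safe Rust: emulated memory is just a Vec<u8>.
struct Cpu {
    pc: usize,
    acc: u8,
    mem: Vec<u8>, // guest code and data live here together
}

impl Cpu {
    fn step(&mut self) {
        // Host-level memory safety: every access is a bounds-checked index.
        let op = self.mem[self.pc];
        let arg = self.mem[self.pc + 1] as usize;
        match op {
            0x01 => { self.acc = self.mem[arg]; self.pc += 2 } // load
            0x02 => { self.mem[arg] = self.acc; self.pc += 2 } // store
            0x03 => { self.pc = arg }                          // jump
            _    => { self.pc += 2 }                           // nop
        }
        // Guest-level safety is a separate question entirely: it depends on
        // whatever bytes ended up in `mem`.
    }
}

fn main() {
    // Guest program: load mem[6] into acc, then store acc to mem[7].
    let mut cpu = Cpu { pc: 0, acc: 0, mem: vec![0x01, 6, 0x02, 7, 0, 0, 42, 0] };
    cpu.step();
    cpu.step();
    println!("mem[7] = {}", cpu.mem[7]); // prints "mem[7] = 42"
}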

                                        That’s admittedly a somewhat “extreme” case, but I don’t think the distinction between programs that do fall into that category and those that don’t is terribly clear. Nearly any program can, if you squint a bit, be viewed essentially as a specialized interpreter for the language of its config file (or command-line flags or whatever else).

                                        1. 2

                                            There’s no distinction here. If your program implements a CPU emulator, then your program can execute with no arbitrary code execution at all and still emulate arbitrary code execution on the virtual CPU. If you want the emulated program to not possibly execute arbitrary virtual instructions, you need to generate the virtual program’s instructions using a safe language too.

                                            In most cases, arbitrary virtual code execution is less dangerous than arbitrary native code execution, though that’s beside the point.

                                          1. 2

                                            So…we agree? My point was basically that attacker-controlled arbitrary code execution can happen at multiple semantic levels – in the emulator or in the emulated program (in my example), and writing the emulator in a safe language only protects against the former, while the latter can really be just as bad.

                                            Though I realize now my example was poorly chosen, so a hopefully better one: even if both the emulator and the emulated program are written in memory-safe languages, if the emulator has a bug due to an array-index use-after-free that causes it to misbehave and incorrectly change the value of some byte of emulated memory, that destroys the safety guarantees of the emulated program and we’re back in arbitrary-badness-land.

                                            1. 1

                                              Sure, but this is just as meaningful as talking about a CPU hardware bug that might cause a native safe program to run amok. Technically true, but not very useful when evaluating the safe programming language.

                              2. 3

                                Right, I agree, and the safe Rust aliasing that the GP described does not make it possible to control the program counter arbitrarily.

                              3. 4

                                Yeah, exactly, this is the part I thought @4ad was arguing was possible. E.g. in C, use-after-free often allows me to make the program start interpreting attacker-provided data as machine code. I thought this was what 4ad was saying was also possible in Rust, but I don’t think that’s what they are claiming now.

                                To me, that’s a big difference. Restricting the possible actions of a program to only those APIs and activities the original code includes, versus C where any machine code can be injected in this same scenario, is a major reduction in attack surface.

                                1. 4

                                  One thing to note is that code is data and data is code, in a true, hard-mathematical sense.

                                  The set of

                                  the possible actions of a program to only those APIs and activities the original code includes,

                                  and

                                  C where any machine code can be injected in this same scenario

                                  is exactly the same (unbounded!). Of course it is much easier in practice to effect desired behavior when you can inject shell code into programs, but that’s hardly required. You don’t need to inject code with ROP either (of course ROP itself is not possible in Rust because of other mitigations, this is just an example).

                                  Please note that in no way am I suggesting that Rust is doing anything bad here. Rust is raising the bar, which is great. I want the bar raised even higher, and we know for a fact that this is possible today, both in theory and in practice. Until we raise the bar, I want people to understand why we need to raise it.

                                  At the end of the day you either are type safe or you aren’t. Of course, the specifics of what happens when you aren’t type safe depend on the language!

                                  PS: arrays can contain other things than integers, e.g. they can contain function pointers. Of course you can’t confuse an int with a function pointer, but using the wrong function pointer is pretty catastrophic.
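
                                  To make the PS concrete, a tiny made-up example (all safe Rust): the call is memory safe, but a wrong or attacker-chosen index picks the wrong, perfectly valid, function.

// An array of function pointers, all in safe Rust.
fn weekly_report() { println!("generating weekly report"); }
fn launch_missiles() { println!("launching the missiles"); }

fn main() {
    let actions: Vec<fn()> = vec![weekly_report, weekly_report, launch_missiles];
    let user_controlled = 2;    // a wrong (or attacker-chosen) index
    actions[user_controlled](); // memory safe, but calls the wrong, valid function
}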

                                  1. 3

                                    is exactly the same (unbounded!).

                                    I guess this is what I don’t understand, sorry for being dense. Can you show a concrete code example?

                                    In my mind I see a program like this:

                                    
                                    enum Action {
                                        GENERATE_USER_WEEKLY_REPORT,
                                        GENERATE_USER_DAILY_REPORT,
                                        LAUNCH_NUCLEAR_MISSILES
                                    }
                                    
                                    impl Action {
                                      pub fn run(&self) {
                                        ...
                                      }
                                    }
                                    
                                    // Remember to remove the nuclear missile action before calling!
                                    fn exploitable( my_actions:  &Vec<Box<Action>>, user_controlled: usize ) {
                                      my_actions[user_controlled].run();
                                    }
                                    
                                    

                                    In my mind, there are two differences between this code in Rust and similar code in C:

                                    1. This only allows the user to launch nuclear missiles; it does not allow them to, say, write to the harddrive or make network calls (unless one of the actions contained code that did that ofc); in C, I’d likely be able to make something like this call any system function I wanted to, whether machine code to do that was present in the original binary or not.

                                    2. In Rust, this doesn’t allow arbitrary control flow, I can’t make this jump to any function in the binary, I can only trick it into running the wrong Action; in C, I can call run on any arbitrary object anywhere in the heap.

                                    I.e. in C, this would let me execute anything in the binary, while in Rust it still has to abide by the control flow of the original program. That is what I thought was the case, anyway.

                                    I think you’re saying this is wrong, can you explain how/why and maybe show a code example if you can spare the time?

                                    1. 4

                                      This is correct and 4ad is mistaken. I’m not sure why 4ad believes the two are equivalent; they aren’t.

                                    2. 3

                                      “is exactly the same”

                                      It simply isn’t, and I’m not sure why you think it is.

                                2. 1

                                  In fact some sophisticated blog authors that explain this array technique often point out they compile to the exact same assembly code.

                                  Do you have any links on this that you recommend?

                        2. 2

                          Good analysis. You didn’t use the words, but this is a great description of the distinction between stocks and flows: https://en.wikipedia.org/wiki/Stock_and_flow. I wish more people talking about software paid attention to it.

                          1. 2

                            Author here.

                            I would also argue that crypto should not change often, like bc. You might add ciphers, or deprecate old ones, but once a cipher is written and tested, there should be very little need for it to change. In my opinion.

                          2. 8

                            For the same reason, I think cryptography developers are probably right (after all, they wrote it) when they say rewriting their software in Rust would make it less buggy.

                            Have they actually rewritten anything? Or have they instead selected a different crypto library they trust better than the previous one? On the one hand, Rust has no advantage over C in this particular context. On the other hand, they may have other reasons to trust the Rust library more than the C one. Maybe it’s better tested, or more widely used, or audited by more reputable companies.

                            If I take your word for it, however, I have to disagree. Rewriting a cryptographic library in Rust is more likely to introduce new bugs than it is to fix bugs that haven’t already been found and fixed in the C code. I do think, however, that the risk is slim if they take care to also port the entire test suite.

                            1. 7

                              In the Cryptography case, isn’t the Rust addition some ASN.1 parsing code? That is cryptography-adjacent, but very much not the kind of code your point about cryptography addresses. Parsing code, unless it is very trivial (and maybe not even then), tends to be some of the more dangerous code you can write. In this particular case, Rust is likely a large improvement in both ergonomics for the parsing and safety.

                              1. 1

                                You’ve got a point. I can weaken it somewhat, but not entirely eliminate it.

                                I don’t consider ASN.1 “modern”. It’s overcomplicated for no good reason. Certificates can be much, much simpler than that: at each level, you have a public key, an ID & expiration date, a certificate of the CA, and a signature from the CA. Just put them all in binary blobs, and the only thing left to parse is the ID & expiration date, which can be left to the application. And if the ID is a URL, and the expiration date is a 64-bit int representing seconds from the epoch, there won’t be much parsing to do… Simply put, parsing certificates can be “very trivial”.
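
                                Something like this hypothetical layout, say (my sketch of the description above, not an actual format): fixed-size blobs at fixed offsets, so the only parsing left is the ID and the expiration date.

// Hypothetical minimal certificate: no ASN.1, just fixed-size fields.
struct Cert<'a> {
    public_key: &'a [u8; 32],   // e.g. an Ed25519 public key
    expires: u64,               // seconds since the Unix epoch
    id: &'a [u8],               // e.g. a URL; interpretation left to the application
    ca_cert: &'a [u8],          // the issuer's certificate, same layout
    ca_signature: &'a [u8; 64], // the issuer's signature over everything above
}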

                                Another angle is that if you need ASN.1 certificates, then you are almost certainly using TLS, so you’re probably in a context where you can afford the reduced portability of a safer language. Do use the safer language in this case.

                                Yet another angle is that, in practice, we can separate the parsing code from the rest of the cryptographic library. In my opinion, parsing of certificate formats does not belong in a low-level cryptographic library. In general, I believe the whole thing should be organised in tiers:

                                • At the lowest level, you have the implementation of the cryptographic primitives.
                                • Just above that, you have constructions: authenticated encryption, authenticated key exchange, PAKE…
                                • Higher up still, you have file formats, network packet formats, and certificates. They can (and should) still be trivial enough that even C can be trusted with them. They can still be implemented with zero dependencies, so C’s portability can still be a win. Though at that level, you probably have an idea of the target platforms, making portability less of a problem.
                                • Higher up still is interfacing with the actual system: getting random numbers, talking to the file system, actually sending & receiving network packets… At that level, you definitely know which set of platforms you are targeting, and memory management & concurrency start becoming real issues. At that point you should seriously consider switching to a non-C, safer language.
                                • At the highest level (the application), you should have switched away from C in almost all cases.
                            2. 2

                              For the same reason, I think cryptography developers are probably right (after all, they wrote it) when they say rewriting their software in Rust would make it less buggy. So the author is wrong about this. His argument is not convincing why he knows better than developers.

                              This is a fair point. When it comes down to it, whether I am right or wrong about it will only be seen in the consequences of the decision that they made.

                            3. 14

                              Here’s the more thorough analysis you’re asking for: this is cryptographic code we’re talking about. Many assumptions that would be reasonable for application code simply do not apply here:

                              • Cryptographic code is pathologically straight-line, with very few branches.
                              • Cryptographic code has pathologically simple allocation patterns. It often avoids heap allocation altogether.
                              • Cryptographic code is pathologically easy to test, because it is generally constant time: we can test all code paths by covering all possible input & output lengths. If it passes the sanitizers & Valgrind under those conditions, it is almost certainly correct (with very few exceptions).

                              I wrote a crypto library, and the worst bug it ever had wasn’t caused by C, but by a logic error that would have happened even in Haskell. What little undefined behaviour it did have didn’t have any visible effect on the generated code.

                              Assuming you have a proper test suite (one that tests all input & output lengths), and run that test suite with sanitisers & Valgrind, the kind of bug Rust fixes won’t occur in your cryptographic C code to begin with. There is therefore no practical advantage, in this particular case, to using Rust over C. Especially when the target language is Python: you have to write bindings anyway, so you can’t really take advantage of Rust’s better APIs.
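
                              For readers who haven’t seen the pattern, “testing all input & output lengths” is roughly the following loop. This is only a sketch with a stand-in toy primitive (not a real cipher, and not Monocypher’s actual suite); the C version run under the sanitizers or Valgrind has the same shape.

// Stand-in primitive: a toy XOR "stream" used only to show the shape of the
// test; a real suite would call the library's actual primitives instead.
fn toy_stream(key: u8, buf: &mut [u8]) {
    for (i, b) in buf.iter_mut().enumerate() {
        *b ^= key.wrapping_add(i as u8);
    }
}

#[test]
fn round_trips_at_every_length() {
    // Constant-time primitives branch on lengths, not on data, so covering
    // every length up to a couple of blocks exercises every code path.
    for len in 0..=128usize {
        let mut msg: Vec<u8> = (0..len).map(|i| i as u8).collect();
        let original = msg.clone();
        toy_stream(0x42, &mut msg); // "encrypt"
        toy_stream(0x42, &mut msg); // "decrypt"
        assert_eq!(msg, original, "round trip failed at length {}", len);
    }
}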

                              1. 2

                                These bugs still occur in critical software frequently. It is more difficult and time consuming to do all of the things you proposed than it is to use a safer language (in my opinion), and the safer language guarantees more than your suggestions would. And there’s also no risk of someone forgetting to run those things.

                                1. 6

                                  These bugs still occur in critical software frequently.

                                  Yes they do. I was specifically talking about one particular kind of critical software: cryptographic code. It’s a very narrow niche.

                                  It is more difficult and time consuming to do all of the things you proposed than it is to use a safer language (in my opinion)

                                  In my 4 years of first hand experience writing cryptographic code, it’s really not. Rust needs the same test suite as C does, and turning on the sanitizers (or Valgrind) on this test suite is a command line away. The real advantage of Rust lies in its safer API (where you can give bounded buffers instead of raw pointers). Also, the rest of the application will almost certainly be much safer if it’s written in Rust instead of C.

                                  And there’s also no risk of someone forgetting to run those things.

                                  Someone who might forget those things has no business writing cryptographic code at all yet, be it in C or in Rust. (Note: when I started out, I had no business writing cryptographic code either. It took over 6 months of people findings bugs and me learning to write a better test suite before I could reasonably say my code was “production worthy”.)

                                  1. 6

                                    Rust’s advantage goes much further than the API boundary, but again, the discussion should be around how to get safer languages more widely used (ergonomics, platform support), not around “super careful programmers who have perfect test suites and flawless build pipelines don’t need safer languages”. To me it is like saying “super careful contractors with perfect tools don’t need safety gear”, except that if you make a mistake in crypto code, you hurt more than just yourself. Why leave that up to human fallibility?

                                    1. 4

                                      Rust’s advantage goes much further than the API boundary

                                      Yes it does. In almost all domains. I’m talking about modern cryptographic code.

                                      again the discussion should be around how to get safer languages more widely used (ergonomics, platform support)

                                      Write a spec. A formal one if possible. Then implement that spec for more platforms. Convincing projects to Rewrite It In Rust may work as a way to coerce people into supporting more platforms, but it also antagonises users who just get non-working software; such a strategy may not be optimal.

                                      not around “super careful programmers who have perfect test suites and flawless build pipelines don’t need safer languages”.

                                      You’re not hearing me. I’m not talking in general, I’m talking about the specific case of cryptographic code (I know, I’m repeating myself.)

                                      • In this specific case, the amount of care required to write correct C code is the same as the amount of care required to write Rust code.
                                      • In this specific case, Rust is not safer.
                                      • In this specific case, you need that perfect test suite. In either language.
                                      • In this specific case, you can write that perfect test suite. In either language.

                                      except if you make a mistake in crypto code, you hurt more than just yourself. Why leave that up to human fallability?

                                      I really don’t. I root out potential mistakes by expanding my test suite as soon as I learn about a new class of bugs. And as it happens, I am painfully aware of the mistakes I made. One of them was even a critical vulnerability. And you know what? Rust wouldn’t have saved me.

                                      Here are the bugs that Rust would have prevented:

                                      • An integer overflow that makes elliptic curves unusable on 16-bit platforms. Inconvenient, but (i) it’s not a vulnerability, and (ii) Monocypher’s elliptic curve code is poorly suited to 16-bit platforms (where I recommend C25519 instead).
                                      • An instance of undefined behaviour that the sanitizers didn’t catch, and that generated correct code on the compilers I could test. (Note that TweetNaCl itself also has a couple of instances of undefined behaviour, which to my knowledge have never caused anyone any problems so far. Undefined behaviour is unclean, but it’s not always a death sentence.)
                                      • A failure to compile code that relied on conditional compilation. I expect Rust has better ways than #ifdef, though I don’t actually know.

                                      Here are the bugs that Rust would not have prevented:

                                      • Failure to wipe internal buffers (a “best effort” attempt to erase secrets from the computer’s RAM).
                                      • A critical vulnerability where fake signatures are accepted as if they were genuine.

                                      Lesson learned: in this specific case, Rust would have prevented the unimportant bugs, and would have let the important ones slip through the cracks.

                                      1. 8

                                        I’m talking about modern cryptographic code.

                                        In this discussion, I think it is important to remember that the cryptography developers are explicitly and intentionally not writing modern cryptographic code. One thing they want to use Rust for is ASN.1 parsing. Modern cryptographic practice says you shouldn’t use ASN.1, and they are right. Implementing ASN.1 parsing in Rust is also right.

                                        1. 4

                                          I’m talking about modern cryptographic code.

                                          So am I.

                                          In this specific case, the amount of care required to write correct C code is the same as the amount of care required to write Rust code.

                                          I disagree.

                                          In this specific case, Rust is not safer.

                                          I disagree here too.

                                          In this specific case, you need that perfect test suite. In either language.

                                          I partially agree. There is no such thing as a perfect test suite. A good crypto implementation should have a comprehensive test suite, of course, no matter the language. But that still isn’t as good as preventing these classes of bugs at compile time.

                                          Rust wouldn’t have saved me.

                                          Not really the point. Regardless of whether it is luck or skill that there are no known critical vulnerabilities in these categories in your code, that disregards both unknown vulnerabilities in your code and vulnerabilities in other people’s code as well. A safe language catches all three and scales; your method catches only one and doesn’t scale.

                                          1. 1

                                            Note that I did go the extra mile and went a bit further than Valgrind & the sanitisers. I also happen to run Monocypher’s test suite under the TIS interpreter, and more recently TIS-CI (from TrustInSoft). Those things guarantee that they’ll catch any and all undefined behaviour, and they found a couple bugs the sanitisers didn’t.

                                            that disregards both unknown vulnerabilities in your code

                                            After that level of testing and a successful third party audit, I am confident there are none left.

                                            and vulnerabilities in other people’s code as well

                                            There is no such code. I have zero dependencies. Not even the standard library. The only thing I have to fear now is a compiler bug.

                                            your method catches only one and doesn’t scale.

                                            I went out of my way not to scale. Yet another peculiarity of modern cryptographic code, is that I don’t have to scale.

                                            1. 1

                                              There is no such code.

                                              Sure there is. Other people write cryptographic code too. Unless you are here just arguing against safe languages for only this single project? Because it seemed like a broader statement originally.

                                              I went out of my way not to scale.

                                              I mean scale as in other developers also writing cryptographic software, not scale as in your software scaling up.

                                              1. 1

                                                Sure there is. Other people write cryptographic code too. Unless you are here just arguing against safe languages for only this single project

                                                I was talking about Monocypher specifically. Other projects do have dependencies, and any project that would use Monocypher almost certainly has dependencies, starting with system calls.

                                                I mean scale as in other developers also writing cryptographic software, not scale as in your software scaling up.

                                                Fair enough. I was thinking from the project’s point of view: a given project only need one crypto library. A greenfield project can ditch backward compatibility and use a modern crypto library, which can be very small (or formally verified).

                                                Yes, other people write cryptographic code. I myself added my own to this ever growing pile because I was unsatisfied with what we had (not even Libsodium was enough for me: too big, not easy to deploy). And the number of bugs in Monocypher + Libsodium is certainly higher than the number of bugs in Libsodium alone. No doubt about that.

                                                Another reason why crypto libraries written in unsafe languages don’t scale, is the reputation game: it doesn’t matter how rigorously tested or verified my library is, if you don’t know it. And know it you cannot, unless you’re more knowledgeable than I am, and bother to audit my work yourself, which is prohibitively expensive. So in practice, you have to fall back to reputation and external signs: what other people say, the state of documentation, the security track record, issues from the bug tracker…

                                2. 6

                                  This made me twitch!

                                  Why make a choice which prevents an entire class of bugs when you could simply put in extra time and effort to make sure you catch and fix them all?

                                  Why lock your doors when you can simply stand guard in front of them all night with a baseball bat?

                                  While I personally would back the cryptography devs’ decision here, I think there is a legitimate discussion to be had around whether breaking compatibility for some long-standing users is the right thing to do. This post isn’t contributing well to that discussion.

                                1. 4

                                  My understanding is that the POSIX shell spec isn’t nearly detailed enough to define a complete shell, so the promise of “if it runs on mrsh, it’s portable” seems dubious.

                                  1. 10

                                    IMO to be strictly POSIX compliant, you’d have to randomly choose between ambiguous interpretations so that you can’t rely on a specific behavior.

                                    1. 2

                                      This would be wonderfully chaotic! I can imagine the docs now:

                                      Feature foo does either thing x or, if you’re unlucky, another slightly different thing y.

                                      1. 4

                                        It is not like this hasn’t ever been done before.

                                        1. 1

                                          Wow that’s simultaneously hilarious and disturbing. I suppose it gets the point across…

                                    2. 6

                                      My understanding (I could very well be wrong!) is that POSIX sh is the “minimum bar” of a shell – it’s quite usable by itself (as long as you don’t need fancy features like, gosh, arrays), and every modern shell strives to be compatible with it. In my experience, anything that’s POSIX-sh-compliant will run properly on dash, bash, mksh, etc.

                                      1. 4

                                        I think that’s more of the result of these shells being tested against each other.

                                        1. 4

                                          In practice I find that the biggest problem is compatibility of shell utilities, rather than the shell itself. The POSIX shell utilities are really bare-bones (and some are outright missing, like stat) and it can be rather tricky to do things sticking to just the POSIX flags. It’s really easy for various extensions to sneak in even when you intend to write a fully compatible script (and arguably, often that’s okay too, depending on what you’re doing and who the intended audience is).

                                          I appreciate that POSIX is intended as a “minimum bar” kind of specification, but sometimes I wish it would be just a little less conservative.

                                        2. 2

                                          Strong https://xkcd.com/1312/ vibes

                                        1. 3

                                            Log output should be included in your unit tests (it is one of the effects of your function, right?).

                                          I disagree with this. That seems like it will get very close to testing the implementation quickly, and generally seems too far on the painful side of the testing spectrum.

                                          (I’m sure there’s exceptions where it does make sense because you have some kind of contract with log consumers, e.g. if you have alerting tied to specific log messages, though in that particular case I’d probably prefer metrics.)

                                            The one thing I’ve found useful is selective log analysis in integration tests. Collect service logs during integration tests, and have a post-test check with some kind of modifiable whitelist/blacklist. I.e., by default an error-level log causes a test failure, but then you might allow certain specific errors. It can also be a good trade-off to wait for certain info log lines to appear when trying to set up the correct pre-conditions.
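
                                            Sketching the idea with a hypothetical helper (names made up): collect the lines during the test run, then fail on any error-level entry that isn’t explicitly allowed.

// Hypothetical post-test check: error-level log lines fail the test unless
// they match an entry in an explicit allowlist.
fn check_logs(lines: &[&str], allowed_errors: &[&str]) -> Result<(), String> {
    for line in lines {
        let is_error = line.contains("ERROR");
        let allowed = allowed_errors.iter().any(|pat| line.contains(pat));
        if is_error && !allowed {
            return Err(format!("unexpected error in service logs: {}", line));
        }
    }
    Ok(())
}

fn main() {
    let logs = ["INFO  server started", "ERROR connection reset by peer"];
    let allowlist = ["connection reset"]; // known, accepted error
    assert!(check_logs(&logs, &allowlist).is_ok());
}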

                                          1. 1

                                            I agree, log output should not be included in unit tests, although like you, I can see a tiny bit of value in this. At $JOB, all our logs have a unique id associated with them [1], and they look like “BAR0025: host=db.example.com failure='Network dropped connection because of reset'”. A unit test of logging messages could check that it has a unique tag (or it hasn’t changed).

                                            [1] Why? To make it easier to search for particular log messages. They don’t have to be full UUIDs, just something that is unique enough for the organization.

                                            1. 1

                                              This was probably poorly phrased. IMO, you should test that the expected logging occurs, when expected. Not the format of the logs (the logger implementations I use for testing inevitably log a very different format than the loggers used in production).

                                              Do you still disagree with this approach?

                                              1. 1

                                                Generally still disagree, yes. Outside particular circumstances, I don’t see logging as relevant to the purpose of the code, hence wouldn’t want to tie that part of the behaviour down by tests. Just like I wouldn’t want to test any number of other tangential side-effects. Don’t have particularly good analogies, but say I know some function will open a certain number of sockets; that’s observable behaviour, but I don’t want to ensure it doesn’t change.

                                            1. 2

                                              I can see multiple issues with this suggestion.

                                              First off, comments marked/flagged “tangential” are automatically second-class, because users can set these to hidden.

                                              This creates a conflict of interest - most commenters would not set their own comments to this status, because it would automatically limit their audience. Were the power to set this status assigned to the community, it could be used as a “soft” disapproval flag. Same if the power was reserved to the mods.

                                              Add a second separate comment section below the main comment. Top-level comments flagged tangential just get moved there. Nested comments in the top section that are flagged tangential get uprooted and moved there, with a link back to parent.

                                              (my emphasis)

                                              This sounds like a UI nightmare to me. What happens if a nested comment is flagged tangential, but its reply isn’t? Suddenly there’s a gaping hole in the comment thread which one has to jump backwards and forwards between sections to fill.

                                              Lastly, I’m not a fan of the binary nature of this essentially arbitrary classification. Some of the participants in a discussion about the shortcomings of C will see a suggestion of using Rust instead as tangential and disruptive, but one can argue with equal validity that the RIIR approach is the “right” one. Deeming one part to be “tangential” therefore just devolves into a popularity/majority discussion.

                                              1. 2

                                                Deeming one part to be “tangential” therefore just devolves into a popularity/majority discussion.

                                                Isn’t it always thus?

                                                To me this is an interesting possibility to explore that breaks the typical approach to self-policing a community: downvotes and pile-ons.

                                                1. 1

                                                  This sounds like a UI nightmare to me. What happens if a nested comment is flagged tangential, but its reply isn’t? Suddenly there’s a gaping hole in the comment thread which one has to jump backwards and forwards between sections to fill.

                                                  No. The whole subtree moves.

                                                1. 1

                                                  Some things I haven’t really figured out so far are:

                                                  • How widely is the cache shared? From a bit of documentation reading and experimenting, I think it’s keyed on the target directory path (unless you specify something extra like a cache id), so a second python project with a similar Dockerfile structure would use the same cache.
                                                  • How large can these caches get? Say you cache pip downloads as indicated here, maybe across various projects and over a longer time; that could make the cache grow to a significant size. Could that cause issues? How likely is it to run into a situation where the cache gets wiped frequently? (I suppose with caches mounted into the build container instead of being copied, there are no particular issues with caches getting too large to be helpful – it came to my mind because that is a concern with the regular CI-provider caches, where saving and loading the cache can take more time than it saves otherwise.)
                                                  1. 1
                                                    1. I think you can set an ID to ensure cache identity works the way you want.

                                                    2. Apparently they’ve started allowing exporting caches in order to integrate with CI. You can do so manually with docker buildx --cache-to/--cache-from in Docker 20.10, and the Docker GitHub Action lets you do that. And it seems like it’s the whole cache, including layers, that gets exported? And then it does indeed get too big, so they’re talking about additional features to fix this.

                                                  1. 5

                                                    Good article! I wish folks would take it for what it is instead of going off on a yes/no generics discussion… The big value in the change here is in the library interface, while the article focuses more on the matching implementation changes. This doesn’t appear to be an article trying to sell generics to the critical internet public (for that, a diff of the public interface should get more focus); it’s a worked example of how coding towards a generic interface works.

                                                    Reading the diff, I was wondering whether there might be a good way to avoid the empty-related parts of the ring buffer. E.g. could you make the buffer work on *T and just check against nil? Or use a type struct { value T; empty bool } internal to the buffer implementation? (I am curious now how that approach would work out concretely with generics.)

                                                    1. 4

                                                      Thanks for reading! I think your observation about storing *T vs T in the buffer would work. I avoided the pointer approach here because I assumed at the outset that the buffer would still need to be exposed as part of the API, but (thankfully) that turned out to not be the case. Had it been, creating a pubsub for pointers to a struct S (e.g. PubSub[*S]), which is quite common, would mean the user would have to manipulate a buffer of pointers to pointers to S (e.g. Buffer[**S]) - not good! If/when generics move beyond the prototype phase, I plan to re-examine this package and fiddle around with different layouts for the buffer and cell types to understand how generics impact performance.

                                                    1. 7

                                                      I don’t want to 💩on the author’s writeup here, because it is a decent one. I’m using it to launch another public objection to Go Generics.

                                                      A lot of proposals for and write ups about Go Generics seem to miss that there’s a very large group of Go users who object to Generics, and for good reason. It’s not because this group questions the efficacy of generics in solving very specific problems very well – objectors are generally well attuned to Generics’ utility. What’s objected to is the necessity of Generics. The question that we pose is do we need generics at all? Are the problems that Generics solve so important that Generics should pervade the language?

                                                      From the author’s conclusion

                                                      I was able to solve a problem in a way that was not previously possible.

                                                      Being able to solve problems in new ways isn’t always valuable; it can even be counter-productive.

                                                      1. 24

                                                        Nothing is necessary except an assembler. Well, you don’t even need the assembler, you can just flip the bits yourself.

                                                        Go has an expressiveness gap. It has some big classes of algorithms that can’t be made into libraries in a useful way. Most people advocate just rewriting basically the same code over and over forever, which is kind of crazy and error-prone. Other people advocate code-generation tools with go generate, which is totally crazy and error-prone, even with the decent AST tools in the stdlib. Generics close the gap pretty well, they’re not insanely complex, and people have had decades to get used to them. If you don’t want to use them yourself, don’t use them, but accept that there are people for whom, say, the ability to just go get a red-black tree implementation that they can use with a datatype of their own choosing, without loss of type safety or performance, will greatly improve the usefulness of the language.

                                                        Plus, from a purely aesthetic standpoint, it always seemed criminal to me to have a language that has first-class functions, and lexical closure, but in which you can’t even write map because its type is inexpressible.

                                                        1. 9

                                                          Go has an expressiveness gap.

                                                          That’s true. You’ve identified some of the costs. Can you identify some of the benefits, too?

                                                          1. 12

                                                            Easy: not having a feature protects you from bright idiots that would misuse it.

                                                            Honestly though, that’s the only argument I can make against generic. And it’s not even valid, because you could say this about almost any feature. It’s a fully general counter argument: give people hammers, some will whack each other’s heads instead of hitting nails.

                                                            Assuming basic competency of the users, and assuming they were designed from the ground up, generics have practically no downsides. They provide huge benefits at almost no marginal cost. There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general-purpose language, weren’t they?

                                                            1. 2

                                                              They provide huge benefits at almost no marginal cost.

                                                              If this huge benefit only shows up in a minor part of the project, or even in a minority of projects, then it has to be balanced and thought through.

                                                              Right now, I don’t know many people who work in Go daily telling me that not having generics makes their day a pain.

                                                              Most of them told me that it’s sometimes painful, but that’s actually pretty rare.

                                                              There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general purpose languages, didn’t they?

                                                              Is the burden really on them? To me it is on the program writer.

                                                              1. 8

                                                                There’s likely a survivorship bias going on there.

                                                                I used Go as a programming language for my side projects for years. The thing that finally got me to give it up was the lack of generics. In writing PISC, the way I had approached it in Go ended up causing a lot of boilerplate for binding functions.

                                                                Go is something I’d happily write for pay, but I prefer expressiveness for my side projects now, as the amount of effort that goes into a side project is a big determining factor in how much I can do in one.

                                                                1. 3

                                                                  There is a sizeable up-front cost for the language designer and the compiler writer, but they were willing to pay that kind of price when they set out to build a general-purpose language, weren’t they?

                                                                  Is the burden really on them? To me it is on the program writer.

                                                                  Assuming we are a collaborative species (we mostly are, with lots of exceptions), then one of our goals should be minimizing total cost. Either because we want to spend our time doing something else, or because we want to program even more stuff.

                                                                  For a moderately popular programming language, the users will far outnumber and outproduce the maintainers of the language themselves. At the same time, the language maintainers’ work has a disproportionate impact on everyone else. To such a ludicrous extent, in fact, that it might be worth spending months on a feature that would save users a few seconds per day. Like compilation speed.

                                                                  Other stuff like generics will affect fewer users, but (i) it will affect them in a far bigger way than shaving off a few seconds of compilation time would have, and (ii) those particular users tend to be library writers, and as such they will have a significant impact on the rest of the community.

                                                                  So yes, the burden really is on the language creators and compiler writers.

                                                                  Note that the same reasoning applies when you write more mundane software, like a train reservation system. While there is rarely any monetary incentive to make that kind of thing not only rock solid, but fast and easy to work with, there is a moral imperative not to inflict misery upon your users.

                                                              2. 5

                                                                I haven’t used Go in anger but here are some benefits from not including generics.

                                                                • Generics are sometimes overused, e.g. many C++ libraries.
                                                                • The type system is simpler.
                                                                • The compiler is easier to implement and high quality error messages are easier to produce.
                                                                • The absence of generics encourages developers to use pre-existing data structures.
                                                              3. 2

                                                                If red-black trees and map were just built in to Go, wouldn’t that solve 90% of the problem, for all practical purposes?

                                                                What I really miss in Go is not generics, but something that solves the same problems as multiple dispatch and operator overloading.

                                                                1. 3

                                                                  Sort of, but no. There’s too many data structures, and too many useful higher-order functions, to make them all part of the language. I was just throwing out examples, but literally just a red-black tree and map wouldn’t solve 90% of the problem. Maybe 2%. Everyone has their own needs, and Go is supposed to be a small language.

                                                                  1. 1

                                                                    Data structures and higher-order functions can already be implemented in Go, though, just not by using generics as part of the language.

                                                              4. 15

                                                              Technically Go does have generics; they just aren’t exposed to the end developer except in the form of the builtin map and array types, and defining new ones is reserved for the language’s own implementers. So in a sense, Go does need generics, and they already pervade the language.
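
                                                              To make the asymmetry concrete, here is a rough sketch in pre-generics Go (hypothetical example): the builtin map is parameterized over its key and value types and fully type-checked, while an equivalent user-defined container has to erase its element type to interface{} and recover it with runtime assertions:

                                                                  package main

                                                                  import "fmt"

                                                                  // Stack is roughly what a reusable user-defined container looks like
                                                                  // without generics: it stores interface{}, so type safety becomes a
                                                                  // runtime concern.
                                                                  type Stack struct{ items []interface{} }

                                                                  func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

                                                                  func (s *Stack) Pop() interface{} {
                                                                      v := s.items[len(s.items)-1]
                                                                      s.items = s.items[:len(s.items)-1]
                                                                      return v
                                                                  }

                                                                  func main() {
                                                                      counts := map[string]int{"a": 1} // builtin map: type-checked at compile time
                                                                      fmt.Println(counts["a"] + 1)

                                                                      var s Stack
                                                                      s.Push("hello")
                                                                      n := s.Pop().(int) // compiles, but panics at runtime: the stored value is a string
                                                                      fmt.Println(n)
                                                                  }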

                                                              I don’t personally have a horse in this race and don’t work with Go, but from a language-design perspective it does seem strange to limit user-developed code in such a way. I’d be curious what your thoughts are on why this discrepancy is OK and why it shouldn’t be fixed by adding generics to the language.

                                                                1. 14

                                                                  I don’t personally have a horse in this race and don’t work with Go, but from a language-design perspective it does seem strange to limit user-developed code in such a way.

                                                                  Language design is all about limiting user defined code to reasonable subsets of what can be expressed. For a trivial example, why can’t I name my variable ‘int’? (In Myrddin, as a counterexample, var int : int is perfectly legal and well defined).

                                                                  For a less trivial example, relatively few languages guarantee tail recursion – this also limits user developed code, and requires programmers to use loops instead of tail recursion or continuation passing style.

                                                                  Adding generics adds a lot of corner cases to the type system, and increases the complexity of the language a good deal. I know. I implemented generics, type inference, and so on in Myrddin, and I’m sympathetic to leaving generics out (or, as you say, extremely limited) to put a cap on the complexity.

                                                                  1. 3

                                                                    I see only two legitimate reasons to limit a user’s capabilities:

                                                                    1. Removing the limitation would make the implementer’s life harder.
                                                                    2. Removing the limitation would allow the user to shoot themselves in the foot.

                                                                    Limiting tail recursion falls squarely in (1). There is no way that guaranteeing tail recursion would cause users to shoot themselves in the foot. Generics is another matter, but I strongly suspect it is more about (1) than it is about (2).

                                                                    Adding generics adds a lot of corner cases to the type system, and increases the complexity of the language a good deal.

                                                                    This particular type system, perhaps. This particular language, maybe. I don’t know Go, I’ll take your word for it. Thing is, if Go’s designers had the… common sense not to omit generics from their upcoming language, they would have made a slightly different language, with far fewer corner cases they will inevitably suffer now that they’re adding it after the fact.

                                                                  Besides, the complexity of a language is never a primary concern. The only complexity that matters is that of the programs written in that language. Now, the complexity of a language does negatively impact the complexity of the programs that result from it, if only because the design space gets bigger. On the other hand, this complexity has the potential to pay for itself, and end up being a net win.

                                                                  Take C++ for instance. Every single feature we add to it increases the complexity of the language, to almost unbearable levels. I hate this language. Yet, some of its features definitely pay for themselves. Range-based for, for instance, while it slightly complicates the language, makes programs that use it significantly cleaner (although only locally). That particular feature definitely pays for itself. (We could discuss other examples, but this one has the advantage of being uncontroversial.)

                                                                    As far as I can tell, generics tend to massively pay for themselves. Not only do they add flexibility in many cases, they often add type safety (not in C++, they don’t). See for instance this function:

                                                                    foo : (a -> b) -> [a] -> [b]
                                                                    

                                                                  This function has two arguments (where a and b are unknown types): a function from a to b, and a list of a. It returns a list of b. From this alone, there is a lot we can tell about this function. The core idea here is that the body of the function cannot rely on the contents of generic types. This severely constrains what it can do, including the bugs it can have.

                                                                    So, when we write let ys = foo f xs, here’s what we can expect before we even look at the source code:

                                                                    • Assuming f is of type a->b, then xs is a list of a, and the result ys is a list of b.
                                                                    • The elements of ys, if any, can only come from elements of xs.
                                                                      • And they must have gone through f.
                                                                      • Exactly once.
                                                                    • The function f itself does not affect the number or order of elements in the result ys
                                                                    • The elements of xs do not individually affect the number or order of elements in the result ys
                                                                    • The only thing that affects the number or order of elements in the result ys is the size of xs (and the code of foo, of course).

                                                                    This is quite unlike C++, or other template/monomorphisation approaches. Done right, generics have the opportunity to remove corner cases in practice. Any language designer deciding they’re not worth their while better have a damn good explanation. And in my opinion, the explanations offered for Go weren’t satisfactory.
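
                                                                  For concreteness, here is roughly what that function would look like under Go’s generics draft (a sketch only, and note that Go’s guarantees are weaker than a pure language’s, since reflection can still inspect values):

                                                                      package sketch

                                                                      // foo mirrors the signature above. The body cannot inspect values of
                                                                      // type A or conjure values of type B on its own; the only way to obtain
                                                                      // a B is to call f, so every element of the result comes from applying
                                                                      // f to an element of xs.
                                                                      func foo[A, B any](f func(A) B, xs []A) []B {
                                                                          ys := make([]B, 0, len(xs))
                                                                          for _, x := range xs {
                                                                              ys = append(ys, f(x))
                                                                          }
                                                                          return ys
                                                                      }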

                                                                    1. 4

                                                                      Complexity of a language is the primary concern!

                                                                      Languages are tools to express ideas, but expressiveness is a secondary concern, in the same way that the computer is the secondary audience. Humans are the primary audience of a computer program, and coherence is the primary concern to optimize for.

                                                                    Literary authors don’t generally invent new spoken languages because they’re dissatisfied with the expressive capability of their own. Artful literature is that which leverages the constraints of its language.

                                                                      1. 4

                                                                      Literary authors don’t generally invent new spoken languages because they’re dissatisfied with the expressive capability of their own. Artful literature is that which leverages the constraints of its language.

                                                                      Eh, I have to disagree here. Literary authors try to stretch and cross the boundaries of their spoken languages all the time, specifically because they search for ways to express things that were not yet expressed before. To give some uncontroversial examples, Shakespeare invented 1700 new words, and Tolkien invented not one, but a couple of whole new languages.

                                                                        I am but a very low level amateur writer, but I can tell you: the struggle with the tool to express your ideas is as real with spoken languages as it is with programming languages. It is an approach from another direction, but the results from spoken languages turn out to be as imperfect as those from programming ones.

                                                                        1. 1

                                                                        I’d argue that constrained writing is more common, if nothing else because showing one’s mastery of a shared language is more impressive than adding unknown elements.

                                                                          Tolkien’s Elvish languages, while impressively complete, are simply used as flavor to the main story. The entire narrative instead leans heavily on tropes and language patterns from older (proto-English) tales.

                                                                          1. 1

                                                                          Yes, you have a point. I mentioned Tolkien because he was the first writer I could come up with who created a new language. But in the end, if you want to express an idea, your audience must understand the language that you use, otherwise they will not get your message. So common language and tropes can help a lot.

                                                                          However, I think your mention of constrained writing is interesting. Because in a way, the fact that Go does not have generics is similar to the constraint that a sonnet must follow a particular scheme in form and content. It is perfectly possible to add generics to Go, the same way it is very possible to slap another tercet at the end of a sonnet. Nothing is stopping you, really. Except that then it would no longer be a sonnet. Is that a bad thing? I guess not. But still almost no-one does it.

                                                                            I’d say that the rules, or the constraints, are a form of communication too. If I read a sonnet, I know what to expect. If I read Go, I know what to expect. Because some things are ruled out, there can be more focus on what is expressed within the boundaries. As a reader you can still be amazed. And, the same as in Go, if what you want to express really does not fit in the rules of a sonnet, or if it is not worth the effort to try it, then you can use another form. Or another programming language.

                                                                          2. 1

                                                                            Your points don’t conflict with my points, and I agree with them.

                                                                          3. 2

                                                                            Can we agree that the goal of programming languages is to reduce costs?

                                                                            • Cost of writing the program.
                                                                            • Cost of errors that may occur.
                                                                            • Cost of correcting those errors.
                                                                            • Cost of modifying the program in the face of unanticipated new requirements.

                                                                            That kind of thing. Now we must ask what influences the costs. Now what about increased expressiveness?

                                                                            A more expressive language might be more complex (that’s bad), more error prone (that’s bad), and allow shorter programs (that’s good), or even clearer programs (that’s good). By only looking at the complexity of the language, you are ignoring many factors that often matter a whole lot more.

                                                                            Besides, that kind of reasoning quickly breaks down when you take it to its logical extreme. No one in their right mind would use the simplest language possible, which would be something like the Lambda Calculus, or even just the iota combinator. Good luck writing (or maintaining!) anything worth writing in those.

                                                                            Yes, generics make a language more complex. No, that’s not a good enough argument. If it were, the best language would only use the iota combinator. And after working for years in a number of languages (C, C++, OCaml, Python, Lua…), I can tell with high confidence that generics are worth their price several orders of magnitude over.

                                                                            1. 2

                                                                              I agree with you that generics can be hugely net positive in the cost/benefit sense. But that’s a judgment that can only be made in the whole, taking into account the impact of the feature on the other dimensions of the language. And that’s true of all features.

                                                                              1. 1

                                                                                Just popping in here because I have minimal experience with Go, but a decent amount of experience in languages with generics, and I’m wondering: if we set aside the implementation challenge, what are some examples of the “other dimensions” of the language which will be negatively impacted by adding generics? Are these unique to Go, or general trade-offs in languages with generics?

                                                                                To frame it in another way, maybe a naive take but I’ve been pretty surprised to see generics in go being rejected due to “complexity”. I agree that complexity ought to be weighed against utility but can we be a little more specific? Complexity of what specifically? In what way will writing, reading, compiling, running, or testing code become more complicated when my compiler supports generics. Is this complexity present even if my own code doesn’t use generics?

                                                                                And just a final comparison on language complexity. I remember when go was announced, the big ticket feature was its m:n threaded runtime and support for CSP-style programming. These runtimes aren’t trivial to implement, and certainly add “complexity” via segmented stacks. But the upside is the ability to ergonomically express certain kinds of computational processes that otherwise would require much more effort in a language without these primitives. Someone decided this tradeoff was worth it and I haven’t seen any popular backlash against it. This feature feels very analogous to generics in terms of tradeoffs which is why I’m so confused about the whole “complexity” take. And like, maybe another naive question, but wouldn’t generics be significantly less tricky to implement than m:n threads?

                                                                                1. 5

                                                                                  It isn’t just implementation complexity of generics itself. It’s also sure to increase the complexity of source code itself, particularly in libraries. Maybe you don’t use generics in your code, but surely some library you use will use generics. In languages that have generics, I routinely come across libraries that are more difficult to understand because of their use of generics.

                                                                                  The tricky part is that generics often provides some additional functionality that might not be plausible without it. This means the complexity isn’t just about generics itself, but rather, the designs and functionality encouraged by the very existence of generics. This also makes strict apples-to-apples comparisons difficult.

                                                                                  At the end of the day, when I come across a library with lots of type parameters and generic interfaces, that almost always translates directly into spending more time understanding the library before I can use it, even for simple use cases. That to me is ultimately what leads me to say that “generics increases complexity.”

                                                                                  1. 2

                                                                                    what are some examples of the “other dimensions” of the language which will be negatively impacted by adding generics?

                                                                                    From early golang blog posts I recall generics add substantial complexity to the garbage collector.

                                                                                    The team have always been open about their position (that generics are not an early priority, and they will only add them if they can find a design that doesn’t compromise the language in ways they care about). There have been [numerous proposals rejected](https://github.com/golang/go/issues?page=3&q=generics++is%3Aclosed+label%3AProposal) for varied reasons.

                                                                                    Someone decided this tradeoff was worth it and I haven’t seen any popular backlash against it

                                                                                    There’s no backlash against features in new languages, because there’s nobody to do the backlash.

                                                                                    Go has already got a large community, and there’s no shortage of people who came to go because it was simple. For them, adding something complex to the language is frightening because they have invested substantial time in an ecosystem because of its simplicity. Time will tell whether those fears were well-founded.

                                                                              2. 1

                                                                                No, expressiveness is the only reason for languages to exist. As you say, humans are the primary audience. With enough brute force, any language can get any task done, but what we want is a language that aids the reader’s understanding. You do that by drawing attention to certain parts of the code and away from certain parts, so that the reader can follow the chain of logic that makes a given program or function tick, without getting distracted by irrelevant detail. A language that provides the range of tools to let an author achieve that kind of clarity is expressive.

                                                                                1. 2

                                                                                  I think we are using “expressive” differently. Which is fair, it’s not really a well-defined term. But for me, expressiveness is basically a measure of the surface area of the language, the features and dimensions it offers to users to express different ideas, idioms, patterns, etc. Importantly, it’s also proportional to the number of things that its users have to learn in order to be fluent, and most of the time actually exponentially proportional, as emergent behaviors between interacting features are often non-obvious. This is a major cost of expressiveness, which IMO is systemically underestimated by PLT folks.

                                                                              3. 3

                                                                                I implemented generics. You’re trying to convince me that it’s worth implementing generics. Why?

                                                                                Besides, the complexity of a language is never a primary concern.

                                                                                I disagree. I think implementation matters.

                                                                            2. 2

                                                              That’s an interesting observation; thanks for sharing it.

                                                                              they just aren’t exposed to the end developer

                                                                              I think this supports my point better than I’m able to. Language design is just as much about what is hidden from developers as what is exposed. That generics are hidden from end users is something I greatly appreciate about Go. So when I refer to generics, I’m referring to generics used by every day developers.

                                                                              I’d be curious what your thoughts on why this discrepancy is OK and why it shouldn’t be fixed by adding generics to the language.

                                                              In my opinion the greatest signal that Go doesn’t need generics is the wonderfully immense corpus of code we have from the last decade – all written without generics. Much of it written with delight by developers who chose Go over other languages for its pleasant simplicity and dearth of features.

                                                              That is not to say that some of us occasionally could have written less code if generics were available. Particularly developers writing library or framework code that would be used by other developers. Those developers absolutely would have been aided by generics. They would have written less code; their projects may have cost less to initially develop. But for every library/framework developer there are five, ten, twenty (I can’t pretend to know) end-user application developers who never had the cognitive load of genericized types foisted on them. And I think that is an advantage worth forgoing generics for. I don’t think I’m particularly smart. Generics make code less readable to me. They impose immense cognitive load when you’re a developer new to a project. I think there are a lot of people like me. After years of Java and Scala development, Go to me is an absolute delight with its absence of generics.

                                                                              1. 6

                                                                                In my opinion the greatest signal that Go doesn’t need generics is the wonderfully immense corpus of code we have from the last decade

                                                                                I don’t have a ready example, but I’ve read that the standard library itself conspicuously jumped through hoops because of the lack of generics. I see it as a very strong sign (that’s an understatement) that the language has a dire, pervasive, need for generics. Worse, it could have been noticed even before the language went public.

                                                                If you had the misfortune of working with bright but incompetent architecture astronauts who used generics as an opportunity to make an overly generic behemoth “just in case” instead of solving the real problem they had in front of them, well… sorry. Yet, I would hesitate to blame the language’s semantics for the failings of its community.

                                                                            3. 7

                                                              I don’t remember the exact details, it was super long ago, but I once wanted to write an editor centered around a nontrivial data structure (“table chain” or “string table” or whatever was the name). The editor also had some display-oriented structures (~cells of a terminal). At some point I needed to be able to experiment with rapidly changing the type of the object stored both in the “cells” and the “chains” of the editor (e.g. to see if adding styles etc. per character might make sense from an architectural point of view). If you squint, those are both kinds of “containers” for characters (a Haskeller would maybe say monads? dunno). I had to basically either manually change all the places where the original “character” type was used, or fall back to interface{}, losing all the benefits of static typing that I really needed. Notably this was long before type aliases, which would have possibly allowed me to push a bit further, though it’s hard for me to recall now. But the pain and impossibility of rapid prototyping at that point was so big that I didn’t see it as possible to continue working on the project, and abandoned it. Not sure if immediately then, or some time later, I realized that this was the rare moment where generics would be valuable, in letting me explore designs I couldn’t realistically explore otherwise.
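
                                                              For what it’s worth, a minimal sketch of the kind of design generics would have allowed (hypothetical names and current draft syntax, not the code the project actually had):

                                                                  package main

                                                                  import "fmt"

                                                                  // Cell and Chain are written once; the element type is a parameter, so an
                                                                  // experiment like "store styled characters instead of plain runes" is a
                                                                  // one-line change at the use site, with static typing intact.
                                                                  type Cell[T any] struct{ Value T }

                                                                  type Chain[T any] struct{ cells []Cell[T] }

                                                                  func (c *Chain[T]) Append(v T) { c.cells = append(c.cells, Cell[T]{Value: v}) }

                                                                  func (c *Chain[T]) Len() int { return len(c.cells) }

                                                                  // StyledChar is one possible experiment: a character plus a per-character style.
                                                                  type StyledChar struct {
                                                                      R     rune
                                                                      Style uint8
                                                                  }

                                                                  func main() {
                                                                      var plain Chain[rune]
                                                                      plain.Append('a')

                                                                      var styled Chain[StyledChar] // swapping the element type is the whole change
                                                                      styled.Append(StyledChar{R: 'a', Style: 1})

                                                                      fmt.Println(plain.Len(), styled.Len())
                                                                  }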

                                                                              In other words, what others say: nontrivial/special-purpose “containers”. You don’t need them until you do.

                                                              Until then I fully subscribed to the “don’t need generics in Go” view. Since then I’m in the “don’t need generics in Go; except when you do” camp. And I had one more hobby project afterwards that I abandoned for exactly the same reason.

                                                              And I am fearful, and do lament, that once they are introduced, we’ll probably see everyone around abusing them for a lot of unnecessary purposes, and that this will be a major change to the taste of the language. That makes me respect the fact that the Team are taking their time. But I do miss them, and if the Team grudgingly accepts the current draft as passable, that is such a high bar that it makes me extremely excited for what’s to come; it will be one of the best ways this compromise could have been introduced. Given that most decisions in languages are compromises of some kind.

                                                                              1. 6

                                                                                Yeah, Go is very much not a language for rapid prototyping. It expects you to come to the table with a design already in mind.

                                                                                1. 2

                                                                                  Umm, what? Honestly not sure if you’re meaning this or being sarcastic (and if yes, don’t see the point). I prototyped quite a lot of things in Go no problem. I actually hold it as one of the preferred languages for rapid prototyping if I expect I might want to keep the result.

                                                                                  1. 5

                                                                                    I’m being totally serious. Go is chock full of stuff that makes typical rapid prototyping extremely difficult. A lack of a REPL. Compiler errors on unused variables. Verbose error handling. And so on. All of these things combine to make it harder to “design on the fly”, so to speak, which is what rapid prototyping frequently means.

                                                                                    With that said, Go works great for prototyping in the “tracer bullet” methodology. That’s where your prototype is a complete and production quality thing, and the iteration happens at a higher level.

                                                                                    1. 1

                                                                                      Got it, thanks! This made me realize that I reach for different languages in different cases for prototyping. Not yet really sure why now. But I feel that sometimes the dynamic types of Lua make me explore faster, whereas sometimes static types of Go or Nim make me explore faster.

                                                                              2. 4

                                                                                I’m going to assume you’re arguing in good faith here, but as a lurker on the go-nuts mailing list, I’ve seen too many people say “I don’t think generics are necessary” or “I haven’t heard a good enough reason for the complexity of generics”. It’s worth pointing out the Go team has collected feedback. Ian Lance Taylor (one of the current proposal’s main authors) spends a large portion of time responding to emails/questions/objections.

                                                                I read a comment from someone who was on the Kubernetes team saying that part of the complexity of the API (my understanding is they have a pseudo-type system inside) comes from the fact that proto-Kubernetes was written in Java, and the differences between the type systems, compounded with the lack of generics, created lots of complexity. (NOTE: I don’t remember who said this, and I am just some rando on the net, but that sounds like a decent example of the argument for generics. Yes, you can redesign everything to be more idiomatic, but sometimes there is a compelling need to do things like transfer a code base to a different language.)

                                                                                1. 1

                                                                                  Ouch, I was wondering why the Kubernetes API looks so painfully like Java and not like Go. TIL that’s because it was literally a dumb translation from Java. :/ As much as I’m a pro-generics-in-Go guy, I’m afraid that’s a bad case for an argument, as I strongly believe it is a really awful and unidiomatic API from Go perspective. Thus I by default suspect that if its authors had generics at their disposal, they’d still write it Java-style and not Go-style, and probably still complain that Go generics are different from Java generics (and generally that Go is not Java).

                                                                                2. 3

                                                                                  I don’t know if the author’s example was a good one to demonstrate the value of generics, but a cursory look at the diff would suggest he didn’t really gain anything from it. I always thought a huge benefit of generics was it saved you 10s or even 100s of lines of code because you could write one generic function and have it work for multiple types. He ended up adding lines. Granted, the author said it was mostly from tests, but still there doesn’t seem to be any dramatic savings here.

                                                                                  1. 3

                                                                    I recommend taking more than a cursory look. The value here is very much in the new library interface. In effect, the package provides generalized channels, and before the change, that generalization meant both a complicated interface and losing compiler-enforced type safety.

                                                                                1. 7

                                                                                  this is remarkable!

                                                                        for the sake of my understanding, what are the other popular options for installing a drop-in C/C++ cross-compiler? A long time ago, I used Sourcery CodeBench, but I think that was a paid product.

                                                                                  1. 7

                                                                                    Clang is a cross-compiler out of the box, you just need headers and libraries for the target. Assembling a sysroot for a Linux or BSD system is pretty trivial, just copy /usr/{local}/include and /usr/{local}/lib and point clang at it. Just pass a --sysroot={path-to-the-sysroot} and -target {target triple of the target} and you’ve got cross compilation. Of course, if you want any other libraries then you’ll also need to install them. Fortunately, most *NIX packaging systems are just tar or cpio archives, so you can just extract the ones you want in your sysroot.

                                                                                    It’s much harder for the Mac. The license for the Apple headers, linker files, and everything else that you need, explicitly prohibit this kind of use. I couldn’t see anything in the Zig documentation that explains how they get around this. Hopefully they’re not just violating Apple’s license agreement…

                                                                                    1. 3

                                                                                      Zig bundles Darwin’s libc, which is licensed under APSL 2.0 (see: https://opensource.apple.com/source/Libc/Libc-1044.1.2/APPLE_LICENSE.auto.html, for example).

                                                                                      APSL 2.0 is both FSF and OSI approved (see https://en.wikipedia.org/wiki/Apple_Public_Source_License), which makes me doubt that this statement is correct:

                                                                                      The license for the Apple headers, linker files, and everything else that you need, explicitly prohibit this kind of use.

                                                                                      That said, if you have more insight, I’m definitely interested in learning more.

                                                                                      1. 1

                                                                                        I remember some discussion about these topics on Guix mailing lists, arguing convincingly why Guix/Darwin isn’t feasible for licensing issues. Might have been this: https://lists.nongnu.org/archive/html/guix-devel/2017-10/msg00216.html

                                                                                      2. 1

                                                                                        The license for the Apple headers, linker files, and everything else that you need, explicitly prohibit this kind of use.

                                                                                        Can’t we doubt the legal validity of such prohibition? Copyright often doesn’t apply where it would otherwise prevent interoperability. That’s why we have third party printer cartridges, for instance.

                                                                                        1. 2

                                                                                          No, interoperability is an affirmative defence against copyright infringement but it’s up to a court to decide whether it applies.

                                                                                      3. 4

                                                                                        When writing the blog post I googled a bit about cgo specifically and the only seemingly general solution for Go I found was xgo (https://github.com/karalabe/xgo).

                                                                                        1. 2

                                                                                          This version of xgo does not seem to be maintained anymore, I think most xgo users now use https://github.com/techknowlogick/xgo

                                                                          I use it myself, and although the tool is very heavy, it works pretty reliably and does what is advertised.

                                                                                          1. 2

                                                                            Thanks for mentioning this @m90. I’ve been maintaining my fork for a while, and just last night I automated creating PRs when new versions of golang are detected, to reduce time to creation even more.

                                                                                        2. 3

                                                                          https://github.com/pololu/nixcrpkgs will let you write nix expressions that will be reproducibly cross-compiled, but you also need to learn nix to use it. The initial setup and the learning curve are a lot more demanding than zig cc and zig c++.

                                                                                          1. 3

                                                                                            Clang IIRC comes with all triplets (that specify the target, like powerpc-gnu-linux or whatever) enabled OOTB. You can then just specify what triplet you want to build for.

                                                                                            1. 2

                                                                                              But it does not include the typical build environment of the target platform. You still need to provide that. Zig seems to bundle a libc for each target.

                                                                                              1. 2

                                                                                                I have to wonder how viable this will be when your targets become more broad than Windows/Linux/Mac…

                                                                                                1. 6

                                                                                                  I think the tier system provides some answers.

                                                                                                  1. 3

                                                                                                    One of the points there is that libc is available when cross-compiling.

                                                                                                    On *NIX platforms, there are a bunch of things that are statically linked into every executable that provide the things that you need for things like getting to main. These used to be problematic for anything other than GCC to use because the GCC exemption to GPLv2 only allowed you to ignore the GPL if the thing that inserted them into your program was GCC. In GCC 4.3 and later, the GPLv3 exemption extended this to any ‘eligible compilation process’, which allows them to be used by other compilers / linkers. I believe most *BSD systems now use code from NetBSD (which rewrote a lot of the CSU stuff) and LLVM’s compiler-rt. All of these are permissively licensed.

                                                                                                    If you’re dynamically linking, you don’t actually need the libc binary, you just need something that has the same symbols. Apple’s ld64 supports a text file format here so that Apple doesn’t have to ship all of the .dylib files for every version of macOS and iOS in their SDKs. On ELF platforms, you can do a trick where you strip everything except the dynamic symbol tables from the .so files: the linker will still consume them and produce a binary that works if you put it on a filesystem with the original .so.

                                                                                    As far as I am aware, macOS does not support static linking for libc. They don’t ship a libc.a, and their libc.dylib links against libSystem.dylib, which is the public system call interface (and does change between minor revisions, which broke every single Go program, because Go ignored the rules). If I understand correctly, a bunch of the files that you need to link a macOS or iOS program have a license that says that you may only use them on a Mac. This is why the Visual Studio Mac target needs a Mac connected on the network to remotely access and compile on, rather than cross-compiling on a Windows host.

                                                                                                    I understand technically how to build a cross-compile C/C++ toolchain: I’ve done it many times before. The thing I struggle with on Zig is how they do so without violating a particularly litigious company’s license terms.

                                                                                                    1. 2

                                                                                                      This elucidates a lot of my concerns better than I could have. I have a lot of reservations about the static linking mindset people get themselves into with newer languages.

                                                                                                      To be specific on the issue you bring up: Most systems that aren’t Linux either heavily discourage static libc or ban it - and their libcs are consistent unlike Linux’s, so there’s not much point in static libc. libc as an import library that links to the real one makes a lot of sense there.

                                                                                          1. 7

                                                                                            “How would you publicly and/or privately act on this present discussion?” would be valuable. I wish the best of success in assembling the new team; once that’s through, I’d love some kind of statement of the mod team on their stance with respect to the major conflicts showing here.

                                                                                            1. 18

                                                                                              are there any explicit diversity / equity / inclusion goals here?

                                                                                              1. 43

                                                                                                I hope gender, skin color, sexual preference, etc have absolutely no bearing on who is/isn’t a mod here.

                                                                                                1. 28

                                                                                                  I believe the only strong selection bias is towards masochism.

                                                                                                  1. 14

                                                                                                    Are you not fully aware by this point of the bias that occurs when inclusion isn’t a priority? Being “neutral” in this way generally ends up creating groups with homogeneous gender, skin color and sexual preference.

                                                                                                    1. 24

                                                                                                      how do you even know these things here? Most people have a nick and an auto-generated avatar picture. Nowhere have we ever given the site any information about age, race, color whatever. I could be a sentient goldfish and it should not matter really

                                                                                                      1. 12

                                                                                                        This is hyperbole. As long as the moderator is good it doesn’t matter who they are.

                                                                                                    2. 7

                                                                                                      We don’t have a demographic view of the site to compare against and have generally avoided collecting personal information, so I don’t have a goal along these lines. Looking at my inbox and following some homepage links, I can see that this process will meet the Rooney rule.

                                                                                                      1. 15

                                                                                                        whoa, someone down voted me for trolling because I asked about DEI criteria? in 2021?

                                                                                                        This is, um, not making a good first impression on this new lobster.

                                                                                                        1. 9

                                                                                                          This is pretty typical, sadly.

                                                                                                          1. 8

                                                                                                            Asking about goals didn’t seem like a troll to me. That said, people have certainly used that topic as bait here and elsewhere before.

                                                                                                            Acting surprised and complaining about downvotes after seeing the answers other commenters gave you seems quite a bit more troll-y.

                                                                                                            1. 5

                                                                                                              If there’s an audience around to make that topic work as troll bait, well, there’s our problem.

                                                                                                              1. 1

                                                                                                                There are different kinds of trolls. What they have in common is that they aim to derail discussions. Leaving aside meta discussions like this one, in almost every discussion on this site, business, hiring practices and the like are explicitly off-topic.

                                                                                                                But there are some people who especially like to discuss those topics anyway and will cheerfully derail a discussion about computing with just a little prompting like that. So one good way to derail a discussion is to talk about some aspect of hiring practices or business dealings.

                                                                                                                Discussing US partisan politics would be similarly effective, but that tends to get shut down quicker, so the trolls try to be a bit more subtle.

                                                                                                                The fact that people are sometimes too easily nudged off topic seems to be a relatively minor problem. But it probably makes people quicker to flag something like OP’s question even in a thread where it’s more topical. Not sure I’d say “well, there’s our problem” about that :)

                                                                                                            2. 6

                                                                                                              You touched a nerve. I became the fifth-most-flagged contributor recently under similar circumstances; this single thread did it. It is difficult for folks to look in the mirror, and anything which requires enough reflection will naturally gather downvotes.

                                                                                                              Don’t worry about it. Focus on being the best contributor that you can be, and you’ll do great.

                                                                                                          1. 4

                                                                                                            Has anyone here played with Idris 2 recently? How does it compare in terms of “readiness” with Idris 1?

                                                                                                            1. 25

                                                                                                              I’m biased here, since I’m writing the thing, but I would strongly recommend using Idris 2 over Idris 1 now.

                                                                                                              The tools aren’t as polished, but that’s coming, and it’s more than made up for by the fact that the tools we do have (especially interactive editing) actually scale beyond small programs - I’m actively using the interactive tools for editing Idris 2 itself. The code it generates is much faster (targeting Chez Scheme, which is well suited to the job) and erasure is predictable, so you actually know what’s going to run at run-time rather than trusting inference to do what you want.

                                                                                                              Also, it’s much more robust. Not that there aren’t lots of issues to deal with, of course - that is always the way - but given that the implementation takes advantage of Idris’ type system, the issues that do arise tend to be more about high-level language design questions and presentational things than anything fundamental.

                                                                                                            1. 6

                                                                                                              Alternatively, Nix docker-tools, which produces images more minimal than Docker usually does.

                                                                                                              This is because BuildKit can build multiple stages in parallel.

                                                                                                              Sounds to me like Docker’s finally catching up with what the Nix daemon has been able to do for years. And “catching up” may be generous here. Oof.

                                                                                                              1. 2

                                                                                                                Does Nix provide a way to cache intermediate build artifacts between builds? (Basically, like .o files caching, but esp. for me in Go and Nim.) I’m a huge fan of Nix, learning & using it for some personal purposes, and even doing some local advocacy, but I haven’t found a way to do that in particular, whereas buildkit does have it. In fact I think it would require some tricks in Nix given that it resets date to timestamp 0 on all files in Nix store. I’m aware of nix-shell, though I don’t have much experience with it yet, but still I think it wouldn’t make much sense to try and use that as part of a CI pipeline (for .o reuse), as it would kinda defeat one of the main advantages of Nix (hermeticity of builds)? I’d be really interested in finding a way to get that reuse, as it would make Nix even more useful to me, speeding up some operations.

                                                                                                                edit: Hm, I’m starting to think it could be doable for hash-based build systems (e.g. Go) with some build hook (for saving the intermediate build artifacts), but it might require a virtual/FUSE filesystem for fetching the intermediate build artifacts when queried by go build.

                                                                                                                1. 1

                                                                                                                  It’s possible to use ccache (and probably sccache) with Nix, but I’m not sure how much that would help with Nim and it wouldn’t work at all with Go.

                                                                                                                  1. 2

                                                                                                                    Can you show me a nix expression doing that? Does it stay self-sufficient enough to be able to be included in nixpkgs and transparently used to build parts of nixpkgs, or does such use of ccache in a nix expression require an outside service (i.e. some persistence outside the nixpkgs “build sandbox”)? I’m really interested in understanding what’s the mechanism behind what you’re suggesting!

                                                                                                                    1. 1

                                                                                                                      Can you show me a nix expression doing that?

                                                                                                                      To be honest I’m quite new to Nix and haven’t gotten it working myself yet :)

                                                                                                                        I think the easiest way to use ccache with Nix is to replace a package’s stdenv; the Nix ccache package comes with an easy way of doing that for packages in your own overlay. A problem is that changing the stdenv changes all of your build hashes.

                                                                                                                      You can also turn off sandboxing and set the ccache environment variables.

                                                                                                                      or does such use of ccache in a nix expression require an outside service (i.e. some persistence outside the nixpkgs “build sandbox”)?

                                                                                                                      Yeah, ccache requires a directory that you keep intact between runs, and sccache uses a remote daemon.

                                                                                                                  2. 1

                                                                                                                    I’m currently fighting with Docker go builds myself. Could you explain how BuildKit helps with caching build artifacts (or provide a link)? With a traditional multistage Dockerfile, I can’t seem to find a reasonable way to share go’s build cache across changes to the source code.

                                                                                                                    1. 2

                                                                                                                      You can do something like:

                                                                                                                            RUN --mount=type=cache,id=go,target=/root/.cache/go-build go build
                                                                                                                      

                                                                                                                      or something like that. See https://hub.docker.com/r/docker/dockerfile/

                                                                                                                        The caveat is that, as I understand it, this cache won’t survive VM rebuilds, so depending on your CI setup it may not help there.
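
                                                                                                                        Putting it together, a (hypothetical, untested) multi-stage Go Dockerfile using cache mounts might look roughly like this - the image names and output path are placeholders, and the cache targets are just Go’s default module and build cache locations:

                                                                                                                            # syntax=docker/dockerfile:1
                                                                                                                            FROM golang AS build
                                                                                                                            WORKDIR /src
                                                                                                                            COPY . .
                                                                                                                            RUN --mount=type=cache,target=/go/pkg/mod \
                                                                                                                                --mount=type=cache,target=/root/.cache/go-build \
                                                                                                                                CGO_ENABLED=0 go build -o /out/app .
                                                                                                                            FROM gcr.io/distroless/static
                                                                                                                            COPY --from=build /out/app /app
                                                                                                                            ENTRYPOINT ["/app"]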

                                                                                                                1. 1

                                                                                                                  I’d be curious about the results of this poll with a follow-up question that either asks how confident they are in their answer, or why they gave the answer they gave.

                                                                                                                  1. 12

                                                                                                                    There’s a lot about this article I like (and the site - powered by solar power and sometimes offline/ cute!) but the xenophobia towards Chinese people is not acceptable.

                                                                                                                    1. 30

                                                                                                                      This seems like an overreaction to me. There’s exactly two comments about China/Chinese people:

                                                                                                                      The Chinese don’t have a reputation for building quality products

                                                                                                                      and

                                                                                                                      The Chinese may not have a reputation for building quality products, but they sure know how to fix things.

                                                                                                                      But:

                                                                                                                      • statements exhibiting prejudice != xenophobia.
                                                                                                                      • reporting on a reputation is just stating a fact: this is indeed the reputation Chinese (consumer?) products have. You can’t infer the author thinks the reputation is accurate, especially given how they acted (they bought the Chinese product anyway).
                                                                                                                      • even if you believe the author does think the reputation is accurate: you don’t know how many experiences they have with Chinese products. Their belief in the accuracy of the reputation may be supported by their own experiences
                                                                                                                      • A jab against the quality of products is not a jab against the people producing the product. Even if the author phrases it using the unfortunately common conflation of a country and its people.
                                                                                                                      • it’s human and useful to generalize: a generalization isn’t necessarily problematic, unless the conclusions are extended too far. They aren’t suggesting you don’t buy Chinese products or only let things be repaired by a Chinese person, are they?
                                                                                                                      1. 17

                                                                                                                                  I agree with the parent, this also rubbed me the wrong way. Even just having “the Chinese” in your vocabulary is too much IMO, no matter whether it displays xenophobia or just unexamined prejudice.

                                                                                                                        1. 9

                                                                                                                          The author’s native language is Dutch, in which it’s still idiomatic to say ‘the Chinese’ to mean ‘the Chinese people’. It used to be idiomatic in English as well, of course, but it has gathered negative connotations in the past few decades. That’s something his proofreader should’ve picked up.

                                                                                                                                  As regards the statements about the quality of Chinese electronics and workmanship, yes, I could do without those as well.

                                                                                                                          1. 2

                                                                                                                            The author’s native language is Dutch, in which it’s still idiomatic to say ‘the Chinese’ to mean ‘the Chinese people’. It used to be idiomatic in English as well, of course, but it has gathered negative connotations in the past few decades

                                                                                                                                  I’m curious how else you would say it. Would you attribute it to the country and not the people? I.e., China (or Chinese manufacturers) don’t have a reputation for quality?

                                                                                                                            Is the issue attributing it to a people as a whole?

                                                                                                                            Not trying to be argumentative, just trying to understand the issue.

                                                                                                                            1. 3

                                                                                                                              I just wouldn’t make unsubstantiated claims about an entire country.

                                                                                                                        2. 17

                                                                                                                          The Chinese don’t have a reputation for building quality products

                                                                                                                          The funny thing about this one is that not only does the person saying it come off as prejudiced, they’re also out of touch.

                                                                                                                                  Almost any electronic device made today is built in China, with components also made in China, from high-end Apple products down to bottom-of-the-barrel knock-offs. Just being made in China doesn’t say much about quality any more.

                                                                                                                          1. 11

                                                                                                                                  There is a condescending tone at play though, which reduces Chinese people (e.g. the guy who repaired his laptop) to members of a group and refuses to treat them as individuals.

                                                                                                                            I don’t take issue with the literal meaning of those sentences, but given their tone and cultural context, I think it’s rather insensitive and unhelpful.

                                                                                                                            1. 2

                                                                                                                              The second occurrence was referring to a repair shop they sent it to. So unless that shop was in China (I don’t think they say so, so I assume not) they’re referring to the ethnicity of the shop worker.

                                                                                                                            2. 9

                                                                                                                              MacBooks are made in China, so if you can agree they’re at least on par with X60 build quality, the point falls apart. Perhaps you could say Lenovo chose a subpar Chinese supplier, but that hardly indicts the whole country.

                                                                                                                              I enjoyed my X200s until hardware failure & blue screens, and my old X61s is in a closet (some sort of display issue). Eventually, these machines wear out. I find MacBooks at least as well-designed/-built, and the M1 ain’t too shabby, so while I miss the 12” ThinkPads I’ll be fine.

                                                                                                                            1. 2

                                                                                                                              but this has been tested against nDPI and a commercial DPI engine developed by Palo Alto Networks, both of which detected TOR traffic encapsulated by Rosen as ordinary HTTPS

                                                                                                                              That might well just be a momentary observation though. It seems likely that such engines just need a small update to recognize TOR/Rosen.

                                                                                                                              1. 3

                                                                                                                                The true test will be if/when censors take note. The main fingerprint that can pinpoint a Rosen client is its strange timing pattern and atypical bandwidth characteristics. These can be tweaked if needed.

                                                                                                                                            This is how researchers managed to detect meek, for example: meek polls for data immediately and then backs off, multiplying its delay interval by 1.5x whenever nothing arrives. Researchers fed this timing data to a machine learning model. However, from what I found, real-world censors today don’t seem to use techniques this advanced to detect circumvention tools.
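
                                                                                                                                            (To illustrate the timing pattern - a toy sketch of the behaviour as described above, not meek’s actual code, with $RELAY_URL standing in for the fronted endpoint:)

                                                                                                                                                # poll right away; multiply the delay by 1.5 whenever a poll comes back empty
                                                                                                                                                delay=0.1
                                                                                                                                                while true; do
                                                                                                                                                    body=$(curl -s "$RELAY_URL")
                                                                                                                                                    if [ -n "$body" ]; then
                                                                                                                                                        delay=0.1    # data arrived: go back to polling quickly
                                                                                                                                                    else
                                                                                                                                                        delay=$(awk "BEGIN { print $delay * 1.5 }")
                                                                                                                                                    fi
                                                                                                                                                    sleep "$delay"
                                                                                                                                                done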

                                                                                                                              1. 21

                                                                                                                                This could be one of those hard cases that I talked about recently. This is mostly critiquing his programming, but then there’s notes about his business work that he’s now more famous for, and the business stuff is off-topic here. I’m not removing this because it’s mostly programming. Please help maintain the topicality of the site by not diving into his business and politics. (And reminder: anyone is welcome to help work through the above cases to figure out where to draw the line and how to express it. Those comments I just linked are my current thinking as I slowly work towards getting more of this more explicitly into /about.)

                                                                                                                                1. 30

                                                                                                                                              I enjoy bashing PG and startupcanistan as much as anyone, but this critique was heavy on “PG is an ossified has-been reactionary” and light on good critiques of Arc.

                                                                                                                                  An article about why Arc has deficiencies and what we can learn from it is one thing; character attacks in the guise of technical critique are another.

                                                                                                                                  I am as sure the real damages and harm PG has done are nontechnical as I am sure this is offtopic.

                                                                                                                                  1. 24

                                                                                                                                                More succinctly: we wouldn’t celebrate an article attacking Larry Wall or Richard Stallman instead of Perl or Emacs.

                                                                                                                                                Or at least, I would hope we wouldn’t.

                                                                                                                                    1. 10

                                                                                                                                      I don’t have as much faith in the ability of the lobsters commentariat (and moderation team) to fairly judge what content is too political to be on-topic as you do. I would say that merely using the word “reactionary” in a pejorative way makes this article far more political than, say, anything I’ve ever posted here about Urbit that was flagged as off topic or trolling.

                                                                                                                                      1. 3

                                                                                                                                        Just to note that the bar for discussing an article here shouldn’t be that it’s worthy of celebration. What’s being discussed is whether this is on-topic at all.

                                                                                                                                        1. 9

                                                                                                                                          Consider the case of an article about, I don’t know, old IBM punchcards. Perfectly good information. Additionally, the author goes into Holocaust ramblings. How much other stuff are you willing to put up with?

                                                                                                                                                      The exploit being used in this article is “mix nontechnical political content, e.g. character assassination, in with sufficient technical content, e.g. language design”.

                                                                                                                                          The article itself could’ve been written purely as a critique of Arc, with a passing reference to its designer, but that clearly isn’t why it was written.

                                                                                                                                          1. 8

                                                                                                                                            This isn’t even close to character assassination. It gives due praise but delves into a serious critique of character or maybe more accurately of method and intent. That was the point of the article. The technical content isn’t an excuse for the political content, it’s an illustrative example. The fact that the article isn’t a good fit for Lobsters shouldn’t matter to the author one bit.

                                                                                                                                            1. 9

                                                                                                                                              It gives due praise but delves into a serious critique of character or maybe more accurately of method and intent. That was the point of the article. The technical content isn’t an excuse for the political content, it’s an illustrative example.

                                                                                                                                              Thank you for making my point!

                                                                                                                                              Lobsters isn’t a site for character critiques and other drama gussied up with supporting technical details.

                                                                                                                                              1. 6

                                                                                                                                                Well, in theory a lot of people take technical advice from his essays on programming languages, language design, etc. If someone believes that’s a bad idea, it is 100% fair game and technical content to make that argument. Not long ago there was a piece that critiqued taking technical advice from Bob Martin by pointing out problems with Clean Code, for example.

                                                                                                                                        2. 1

                                                                                                                                          More succinctly: (make-my-point)

                                                                                                                                        3. 6

                                                                                                                                          I disagree, it is fairly technical and on point with Arc.

                                                                                                                                          Speaking as someone who actually wrote a program in Arc when it was released.

                                                                                                                                        4. 24

                                                                                                                                          Is this a reasonable summary of the article?

                                                                                                                                          • PG’s writing has taken a reactionary turn
                                                                                                                                          • Brevity in language design is a flawed and unrigorous notion. He’s using his intuition, which has not held up to reality
                                                                                                                                          • This is evidence that he uses his intuition everywhere; his opinions about politics shouldn’t be taken seriously.

                                                                                                                                                      It’s a fair enough set of observations, although I’m not sure the argument is airtight. It’s also a very roundabout way of refuting political arguments… I’d rather just read a direct refutation of the politics (on a different site).

                                                                                                                                          1. 16

                                                                                                                                            Yes, the politics mentioned in the introduction felt out of place. The rest of the article was well-written and dispassionately argued, but I couldn’t help feeling the whole piece was motivated by political disagreements with Graham (epitomized by the coinbase tweet), and that diminished its impact for me.

                                                                                                                                            1. 8

                                                                                                                                              I don’t think it’s as clear as it could be, but I read the article as starting from the assumption that Graham’s recent political and social writing is poor, and then asking whether the earlier more technical writing is similarly flawed.

                                                                                                                                              If the argument went the way you said, it would be pretty bad. This is why I think talking about logical fallacies is less valuable than many people think. It’s usually pretty easy to tell if a precisely stated argument is fallacious. What’s harder is reconstructing arguments in the wild and making them precise.

                                                                                                                                              1. 13

                                                                                                                                                Yeah, if you want PG criticism, just go straight for Dabblers and Blowhards. It’s funny and honest about what it’s doing.

                                                                                                                                                https://idlewords.com/2005/04/dabblers_and_blowhards.htm

                                                                                                                                                This article spends a lot of words saying something that could be said a lot more directly. I’m not really a fan of the faux somber/thoughtful tones.

                                                                                                                                                            (FWIW I think PG’s latest articles have huge holes, with an effect that’s possibly indistinguishable from that of willfully creating confusion. But it’s also good to apply the principle of charity, and avoid personal attacks.)

                                                                                                                                              2. 5

                                                                                                                                                You’ve removed an implication that libraries matter as much as the base language, some negative remarks on Paul Graham’s work as a language designer, and some positive remarks on Paul Graham’s overall effectiveness as a developer, technical writer, and marketer.

                                                                                                                                                But yes, the article seems fairly well summarized by its “This is all to say that Paul Graham is an effective marketer and practitioner, but a profoundly unserious public intellectual (…)”.

                                                                                                                                              3. 12

                                                                                                                                                I’m not removing this because it’s mostly programming.

                                                                                                                                                The programming that is mentioned is there to make a case against a person and extend it to a broader point about people. I would’ve made the call the other way.

                                                                                                                                                1. 17

                                                                                                                                                  There’s a lot of interesting insight here into how to do language design (and how not to do it). I’m glad it stayed up.

                                                                                                                                                2. 8

                                                                                                                                                              This is a tricky one, yeah. It feels like there’s really three things going on in this article:

                                                                                                                                                  • The writer is bashing Paul Graham
                                                                                                                                                  • The writer is making it about Paul Graham’s political/social writings
                                                                                                                                                  • The writer is supporting those by talking about Paul Graham’s history in the technology field

                                                                                                                                                  When looked at that way, I’d lean slightly towards it being offtopic. If one wanted to write something about the design of Arc and the history and origin of design mistakes and the personality of the person that resulted in those mistakes, one could re-use the same arguments in this article and do so. I think one would come up with a very different article if so. So it’s not about the tech or the intersection of humanity and their artifacts, it’s about Paul Graham and their opinions.

                                                                                                                                                  1. 6

                                                                                                                                                    I’m glad this article about PG made it to lobsters, otherwise I wouldn’t have seen it. I’ve had a similar journey with PG’s writings as the author, going so far as to purchase Hackers and Painters when I was younger and thought programming made me special. I enjoyed learning a bit more about arc than I would have had this article been moderated off the site.

                                                                                                                                                    1. 5

                                                                                                                                                      Thanks for your hard work @pushcx! Moderation of topics is what makes lobsters great!

                                                                                                                                                      1. 4

                                                                                                                                                        My impression on this meta-topic of to what extent lobste.rs should have posts that touch to some extent on non-technical questions: We’ve arrived at a point where there’s a group of readers who will indiscriminately flag anything off-topic that falls into this category, and another group of readers who will indiscriminately upvote anything of this category that makes it through.

                                                                                                                                                        I’d suggest that it might be better for the health of the community to be a bit more permissive in terms of what political/social/cultural tech-adjacent stories are allowed, and to rather aim to let those that don’t want to see those posts here filter them using the tagging system. (But I’m sure that’s been suggested before.)

                                                                                                                                                      1. 3

                                                                                                                                                                    Would have been nice to have this differentiated from Prolog, and/or to see more concretely how this might be used in static program analysis. The Wikipedia article helps.

                                                                                                                                                        1. 34

                                                                                                                                                          How common is it to find git stash scary? I can’t recall hearing that from anyone I’ve worked with. The manpage has a clear explanation of what it’s for and how it works, with multiple examples covering common use cases.
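
                                                                                                                                                                      For instance, the classic “pull into a dirty tree” case from the manpage’s examples is just:

                                                                                                                                                                          git stash
                                                                                                                                                                          git pull
                                                                                                                                                                          git stash pop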

                                                                                                                                                          1. 19

                                                                                                                                                            I think stash is an odd corner of Git.

                                                                                                                                                            • It doesn’t modify the commit graph. It’s really easy to stash stuff and forget about it. I’ve definitely rewritten code because I forgot it was stashed.
                                                                                                                                                            • It’s an optional part of your workflow. I use git stash but I know plenty of people who have used git for years who don’t touch it. This means it’s not mentioned in “Intro to Git”-type resources.
                                                                                                                                                                        • git stash drop is one of the few ways to lose your work. I’ve lost more work to stash drop than to reset --hard.

                                                                                                                                                            I really like git stash now, but I have learned to avoid keeping important code stashed for very long.

                                                                                                                                                            1. 8

                                                                                                                                                                          Yeah, my stash tends to grow with dead code-paths and I have to clean it out every so often. IMO it’s particularly useful as a way to move uncommitted changes to a different branch. The syntax of stash save will never not confuse me though, it’s so unergonomic.
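
                                                                                                                                                                          For the record, the branch-moving dance I mean is roughly this (from memory; the branch name is made up):

                                                                                                                                                                              git stash push -m "wip: half-finished change"
                                                                                                                                                                              git switch other-branch    # `git checkout other-branch` on older Git versions
                                                                                                                                                                              git stash pop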

                                                                                                                                                              1. 2

                                                                                                                                                                            Hmm, in what sense does it not modify the commit graph? It seems like it does to me:

                                                                                                                                                                git init
                                                                                                                                                                echo hello > a
                                                                                                                                                                git add a && git commit -m "Added a"
                                                                                                                                                                echo goodbye > a
                                                                                                                                                                git stash
                                                                                                                                                                git log --decorate --all --graph --pretty=format:"%C(auto)%h %<(7,trunc)%C(auto)%ae%Creset%C(auto)%d %s [%ar]%Creset"
                                                                                                                                                                *   232e380 tom-g.. (refs/stash) WIP on master: 1a86647 Added a [2 seconds ago]
                                                                                                                                                                |\
                                                                                                                                                                | * ba5c812 tom-g.. index on master: 1a86647 Added a [2 seconds ago]
                                                                                                                                                                |/
                                                                                                                                                                * 1a86647 tom-g.. (HEAD -> master) Added a [10 seconds ago]
                                                                                                                                                                
                                                                                                                                                              2. 8

                                                                                                                                                                I don’t find it scary as such, but feel it’s messy and tends to lose me work. Typically, by running into conflicts that I can’t resolve with git stash pop (possibly by accidentally popping the wrong change). I expect there might be a way to recover, but that’s where it typically ends and I start over from scratch.

                                                                                                                                                                1. 2

                                                                                                                                                                  Makes sense. Popping the wrong change by mistake and thus losing it would hurt. I almost always use git stash apply instead of git stash pop, though obviously sticking with apply has the side effect of the stash list growing over time.
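
                                                                                                                                                                              In other words, something like:

                                                                                                                                                                                  git stash apply    # re-applies stash@{0} but keeps the entry in the list
                                                                                                                                                                                  # ...check that the merge went fine, then clean up explicitly...
                                                                                                                                                                                  git stash drop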

                                                                                                                                                                2. 3

                                                                                                                                                                  How many here learned git with a GUI? I use git stash all the time without thinking about it but I remember avoiding it for a couple years after I somehow shot myself in the foot with it using a GUI. I only use the cli and scripting at the moment.

                                                                                                                                                                  1. 2

                                                                                                                                                                              I wouldn’t call it scary; however, it’s common for guides to caution against it and advocate replacing it with temporary commits.
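
                                                                                                                                                                              The temporary-commit alternative those guides suggest looks something like this (a rough sketch, not a full recipe):

                                                                                                                                                                                  git commit -am "WIP"    # park tracked changes on the branch instead of the stash
                                                                                                                                                                                  # ...switch branches, pull, or do whatever needed the clean tree...
                                                                                                                                                                                  git reset HEAD~1        # unwind the WIP commit; the changes come back to the working tree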

                                                                                                                                                                              The potential for merge conflicts, and subsequently losing work when doing git stash pop, is the one part I might call scary.

                                                                                                                                                                    1. 2

                                                                                                                                                                      I find stash scary because if you make a mistake with it you cannot fix it with the reflog. I use it very gingerly and only when I know my editor is also making backups.
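
                                                                                                                                                                                (For what it’s worth, the one escape hatch I know of isn’t the reflog at all: the dropped stash commit usually survives as an unreachable object until gc runs, so something like this can sometimes dig it back out - from memory, with <sha> being whatever fsck prints:)

                                                                                                                                                                                    git fsck --unreachable | grep commit
                                                                                                                                                                                    git show <sha>           # inspect the candidates
                                                                                                                                                                                    git stash apply <sha>    # or: git branch rescued-stash <sha>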