1. 33

    I’ve run into this mentality myself a couple of times: people claim in 2018 that they can write safe C/C++; it’s only those other people who can’t.

    1. 7

      I would claim that 2018 is the best time (yet) for writing memory-safe C++. We have tooling and paradigms now where we don’t need to rely on raw pointers and can make use of things like shared_ptr and unique_ptr. We can utilize the vector class, which gives us OOB write protection (via the vector::at method). We can even look to the design decisions made in Rust and see they draw their roots from modern C++ (think RAII). All the protections of Rust already exist in C++, and are distinct enough that tooling can catch when developers don’t use them.
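
      To make the contrast concrete, here is a minimal sketch (my own illustration, not from the comment above; the helper name is mine) of the two mechanisms mentioned: bounds-checked access via vector::at, and RAII ownership via unique_ptr.

      ```cpp
      #include <cassert>
      #include <memory>
      #include <stdexcept>
      #include <vector>

      // Returns true if an out-of-range access was caught and reported,
      // rather than being silent undefined behaviour (as v[i] would be).
      bool checked_access_throws(const std::vector<int>& v, std::size_t i) {
          try {
              (void)v.at(i);  // bounds-checked, unlike operator[]
              return false;
          } catch (const std::out_of_range&) {
              return true;
          }
      }

      int main() {
          std::vector<int> v{1, 2, 3};
          assert(checked_access_throws(v, 10));  // OOB access is caught
          assert(!checked_access_throws(v, 1));  // in-bounds access is fine

          // unique_ptr frees its allocation automatically (RAII): there is
          // no delete to forget and no double-free to write.
          auto p = std::make_unique<int>(42);
          assert(*p == 42);
          return 0;
      }
      ```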

      1. 20

        I agree about it being the best time to write safer C++. I’ll add C, too, given all the analyzers. The empirical evidence Gaynor references shows the vast majority of C/C++ code fails to be safe anyway. So a safe-by-default option is better to use, with unsafety turned on selectively when necessary.

        Also, the key aspects of Rust’s memory safety come from the Cyclone language, which was a safer C with some temporal safety among other things. Clay was another; it had linear and singleton types.

        1. 15

          All the protections of Rust already exist in C++

          Unless you are claiming C++ has a way to ensure data race freedom, this does not seem true.

          1. 10

            Smart pointers have been around for more than a decade (in Boost before they got into std::), and STL has been around for ages of course. From the memory safety perspective, the tools have been around for a long time. Perhaps the issue is that none of these things are mandatory.

            1. 4

              Halfway there: C++ was built on an unsafe-by-default foundation, C, with good protections added that also aren’t mandatory. It evolved from there in a lot of ways I didn’t keep up with. I just know that, plus its parsing/semantic headaches, gave C++ a brutal start if the goal was safety on average or high assurance.

            2. 9

              (via the vector::at method)

              Which you have to remember to use instead of the far easier, nicer-looking []. I’ve seen v.at in a C++ codebase exactly once in my life. As usual, the default in C++ is unsafe.

              We can even look to the design decisions made in Rust and see they draw their roots from modern C++ (think RAII)

              True but not sure of the relevance.

              All the protections of Rust already exist in C++

              They most certainly do not.

              tooling can catch when developers don’t use them

              There are no tools that I’m aware of that guarantee an absence of bugs in a C++ codebase. I’ve used most of them. Bugs still happen. Bugs that don’t happen in safer languages.

              1. 8

                All the protections of Rust already exist in C++

                I’m not sure that’s entirely true, but I could be wrong. I agree with what you are saying about RAII, but a few other things pop to mind:

                • Rust tries really hard to ensure you have either only immutable references, or a sole mutable reference. This prevents the old “iterate over a collection and mutate it” problem.

                • It also forces you to think about sharing semantics across threads (i.e. Sync trait).

                • In Rust you can design APIs which “consume” their parameters. i.e. you pass ownership to a callee, and it never hands it back. This is useful for scenarios where you don’t want the user to try and re-use a conceptually finalized data structure. Perhaps reuse of the structure could be unsafe for example.

                I’m sure there will be other examples. And I’m sure the proponents of Rust will provide those in more comments ;)

                Maybe you can do these kinds of things in modern C++?
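
                For the “consume” case specifically, the closest modern-C++ approximation I know of is a pass-by-value unique_ptr; below is a sketch of my own (the Message/send names are hypothetical). The key difference from Rust is that the compiler does not reject later use of the moved-from handle; you only get a well-defined null pointer at run time.

                ```cpp
                #include <cassert>
                #include <memory>
                #include <string>

                // A conceptually finalized value the caller should not reuse.
                struct Message {
                    std::string body;
                };

                // Taking unique_ptr by value "consumes" the argument: ownership
                // moves into the callee and is never handed back. Unlike Rust,
                // later use of the moved-from pointer still compiles; the only
                // guarantee is that it is left null.
                std::string send(std::unique_ptr<Message> m) {
                    return "sent: " + m->body;
                }

                int main() {
                    auto m = std::make_unique<Message>();
                    m->body = "hello";
                    assert(send(std::move(m)) == "sent: hello");
                    // Not a compile error, merely a guaranteed-null handle:
                    assert(m == nullptr);
                    return 0;
                }
                ```

                So the API shape is expressible, but the enforcement is run-time convention rather than a compile-time ownership check.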

                1. 6

                  The problem with C and C++ is not only memory safety. Are you 100% sure you have absolutely no chance of a signed int overflow anywhere in your codebase?
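
                  For anyone who wants to check rather than hope: since signed overflow is undefined behaviour, you cannot legally compute it and inspect the result afterwards. A small sketch of mine using the GCC/Clang overflow builtins (not portable to MSVC; the wrapper name is hypothetical):

                  ```cpp
                  #include <cassert>
                  #include <climits>

                  // Signed overflow is undefined behaviour in C and C++, so
                  // `INT_MAX + 1` cannot be evaluated and then tested.
                  // __builtin_add_overflow (GCC and Clang) performs the addition
                  // with a defined overflow flag instead.
                  bool add_overflows(int a, int b, int* out) {
                      return __builtin_add_overflow(a, b, out);
                  }

                  int main() {
                      int r;
                      assert(add_overflows(INT_MAX, 1, &r));     // UB as a plain `+`
                      assert(!add_overflows(1, 2, &r) && r == 3); // normal case
                      return 0;
                  }
                  ```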

                  1. 15

                    Compile with -fwrapv to instantly unlock the same behavior as Rust.

                    1. 5

                      To get the K&R C behavior, not the house-of-horrors C standard behavior.

                      1. 3

                        Or better, -ftrapv to trap even errors that Rust won’t catch. (I don’t understand why this wasn’t made the default behaviour in Rust, to be honest; the performance hit should ultimately be negligible, and they already make some choices for safety-over-performance).

                        1. 4

                          The reason it is not default in Rust is that performance hit was not negligible.

                          1. 0

                            Got some numbers? Part of the problem is probably that the code generator (LLVM) doesn’t handle it very well; but basing a language decision on a current limitation of the code generator doesn’t seem wise. (Especially if it’s a choice between “fail to detect some potentially critical code issue” and somewhat reduced performance).

                            1. 5

                              I don’t have the reference for Rust handy, but NIST Technical Note 1860 is one good reference. To quote the abstract:

                              We performed an experiment to measure the application-level performance impact of seven defensive code options… Of the seven options, only one yielded clear evidence of a significant reduction in performance; the main effects of the other six were either materially or statistically insignificant.

                              That one option is -ftrapv.

                              1. 1

                                That report is specific to GCC, though, which has worse overflow checking even than LLVM.

                                “In practice, this means that the GCC compiler generates calls to existing library functions rather than generating assembler instructions to perform these arithmetic operations on signed integers.”

                                Clang/LLVM doesn’t do that. Needless to say, it explains much of the performance hit, but not how it would affect Rust. I’m curious about what the real numbers do look like.

                              2. 1

                                (actually, having done some recent ad-hoc tests LLVM seems to do ok, mostly just inserting a jo instruction after operations that can overflow, which is probably about as good as you can do on x86-64. I’d still like to see some numbers of how this affects performance, though).

                        2. 2

                          I agree memory safety is not the only kind of security bug, but it is the focus of this article, which is why I focused on it in my comment.

                      2. 4

                        And how do you know they’re wrong?

                        We see evidence all the time. Easily preventable bugs in old software everybody used but nobody wanted to make good. Bugs that would’ve been obvious if people had looked at compiler warnings or run a static analyzer, but nobody did.

                        1. 24

                          And how do you know they’re wrong?

                          I don’t know about @szbalint, but I know they’re wrong because of ample experience with people who think they can write safe C code and can’t. I have literally never met or worked with an engineer I’d trust to write C code, myself obviously included.

                          I’ve been compiling all C and C++ code with -Wall for at least the last 15 years, and the last 10 with -Wall -Wextra -Werror. I’ve used free and paid-for static analysers on production code. Tests, extensive code reviews, valgrind, clang sanitizers, the lot.

                          The end result were bugs, bugs, and more bugs, many of them security vulnerabilities. This problem isn’t solvable with developer discipline and tools. We’ve tried. We’ve failed.

                          1. 2

                            Has your experience with D been much better? I’ve been able to segfault D because I didn’t realise that SomeClass c; generates a null pointer. I really wish the language were modified to make this impossible without some extra attributes like @trusted.

                            That’s the only way I’ve managed to really “crash” D, though. The rest of the time I get pretty stack traces at runtime whenever I mess up, much more controlled and predictable error-handling.

                            1. 1

                              Has your experience with D been much better?

                              Vastly so.

                              I’ve been able to segfault D because I didn’t realise that SomeClass c; generates a null pointer.

                              That’s not considered an issue in D. Variables are all default-initialised so you’ll get a crash at the point where you forgot to assign the reference to a new instance. Deterministic crashes in the face of bugs is a good thing.

                              1. 1

                                A segfault is a pretty obscure thing, and I’m not sure it’s even exactly “deterministic”. Are all arches guaranteed to react the same way to dereferencing a null pointer? It’s not like it’s raising a D NullPointerException, if such a thing even exists, so I don’t get a stack trace telling me where the problem is, unlike every other D “crash”.

                                I still don’t understand why making this segfaulting easy is desirable. It feels like an obvious rake to step on.

                                1. 1

                                  A segfault is a pretty obscure thing

                                  To a scripter, maybe, but not to a systems programmer.

                                  and I’m not sure it’s even exactly “deterministic”

                                  Some are, some aren’t. The deterministic ones are of the good variety. Always crashing because of an uninitialised reference is ok. Getting a segfault somewhere else in your app isn’t.

                                  It’s not like it’s raising a D NullPointerException, if such a thing even exists

                                  It doesn’t exist, or rather, its version is to just segfault.

                                  so I don’t get a stack trace telling me where the problem is, unlike every other D “crash”.

                                  % coredumpctl gdb
                                  (gdb) bt
                                  

                                  If you’re not running systemd then run the program again under gdb. Since it always crashes in the same place you’ll get the same result. If you’re on Windows you open the debugger right there and then and look at the backtrace.

                                  It feels like an obvious rake to step on.

                                  It isn’t, it always takes me 30s to fix.

                            2. 2

                              I don’t know what your use involves, but sometimes using e.g. Coverity is not much better than never using it. All those things have to be done in the context of a comprehensive build and test system, and always done on every checkin. And there is no substitute for good programmers with a solid education. I have met a number of really excellent C programmers who produce solid, trustworthy code.

                              And, as always in such discussions, none of these assertions make sense without a “compared to”. People write unsafe C code, compared to which people who write what?

                              1. 2

                                using e.g. Coverity is not much better than never using it

                                Not in my experience.

                                And there is no substitute for good programmers with solid education.

                                Few and far between.

                                I have met a number of really excellent C programmer who produce solid, trustworthy code.

                                I have not. And that includes meeting compiler writers and C++ luminaries.

                                People write unsafe C code, compared to which people who write what?

                                To people who write code in Ada, Rust, …

                              2. 1

                                The end result were bugs, bugs, and more bugs, many of them security vulnerabilities. This problem isn’t solvable with developer discipline and tools. We’ve tried. We’ve failed.

                                Then why is it that when the bugs that relate to language level issues are dissected in the public, they nearly always seem like they’d be prevented by discipline & disciplined application of tooling?

                                1. 4

                                  Even the best programmers have an error rate. Much lower than novice programmers, but it’s still there.

                                  1. 1

                                    You are stating the obvious here.

                                    The question is, why aren’t these errors being caught when it is so easy to do?

                                    1. 6

                                      The nature of the tooling seems like a prime candidate answer to that question. In C and C++ you often need to opt into constructs and static analyses in order to guard against memory safety bugs. Just look at all the tortured defenses in this thread alone. It’s some combination of “oh just use this particular subset of C++” all the way to “get gud.”

                                      In memory safe languages the tooling generally demands that you opt out in order to write code that is susceptible to memory safety bugs.

                                      It’s just another instance of “defaults matter.”

                                      1. 0

                                        I think we’re going in circles here. None of what you say is evidence that people can’t write safe code. You’re just saying they (most of them) won’t, which isn’t the contentious point.

                                        1. 5

                                          Seems like a distinction without a practical difference. You asked a question, “why aren’t these errors being caught when it is so easy to do,” and I answered with, essentially, “maybe it’s not as easy as you think.” I proposed a reason why it might not be easy.

                                      2. 4

                                        The question is, why aren’t these errors being caught when it is so easy to do?

                                        It isn’t easy. If it were, they’d be caught. The litany of security vulnerabilities everywhere shows us that we’re doing it wrong.

                                        1. 1

                                          Right. Why not?

                                      3. 2

                                        Did I claim developer discipline wouldn’t prevent bugs? What I’m claiming is that it’s humanly impossible to manage that level of discipline in any large codebase. And I’m claiming that because we have decades of experience telling us that it’s the case.

                                    2. 3

                                      You raise a good point. I think anyone who thinks their own code is perfect is wrong, but I can’t prove it. I suppose, also, that it could be vacuously true in the sense that maybe their own code is perfect, but they haven’t written production systems entirely by themselves. Was that what you were suggesting?

                                      1. 2

                                        You raise a good point. I think anyone who thinks their own code is perfect is wrong, but I can’t prove it.

                                        Can you write one line of code that is perfect? Why not two? Why not ten? Why not a thousand?

                                        Even then, there’s a continuum between perfect and unmitigated disaster. I’ll grant you that I don’t really believe in large scale software ever being perfect in a literal sense of the word. Not even bug free. However, I believe there are individuals who can produce safe software in C. And if given enough control over the platform the code runs on, they can lock things down to virtually guarantee that language level issues are not ever going to turn into full compromise (RCE) or secret leaks.

                                        If you need something for a case study, why not take a look at qmail? The fun thing is that the papers and writeups about qmail’s security practices don’t really mention things such as extensive use of static analyzers, fuzzers, and formal verification. Despite that, it has an incredible track record. I think there is much more that could be done to raise the bar.

                                        I suppose, also, that it could be vacuously true in the sense that maybe their own code is perfect, but they haven’t written production systems entirely by themselves. Was that what you were suggesting?

                                        Security costs time and isn’t sexy. Worst of all, you can’t measure it like you can measure performance or the length of the list of features. So even if someone out there is producing “perfect” code, it’s likely that the project goes mostly unheard of. If one were to dig up and point out such a project, people would just call it a toy / academic exercise / “nobody uses it.” People might say they care about security but they really don’t, they just use whatever is convenient or popular. And when you point out the bugs in that, well, developers being developers, they just pat each other on the back. “Dude, bugs happen! Give ’em a break!”

                                        It’s especially bad in any commercial setting, which is why I think it is indeed the case that the people who are capable of writing secure software are, in the end, not doing that, because they don’t get to write an entire production system on their own terms. I don’t think it’s a coincidence that the project I just mentioned is essentially a solo project by one person.

                                        I’m in that boat too, sort of. At my day job there’s so much code that would get you kicked out if it were my personal project with a security guarantee. I’m not at liberty to rearchitect and rewrite the software. The markets opt out of security.

                                        1. 6

                                          Can you write one line of code that is perfect? Why not two? Why not ten? Why not a thousand?

                                          It depends on the line. Perfection isn’t just about the semantics the code has to the compiler, but about how a future reader will interpret it. Even the shortest snippet of code can have subtleties that may not be obvious later, no matter how clear they seem to me today. Concretely, in the 90s, “everyone knew” that system() was a huge security vulnerability because of how it adds to the attack surface, and it wasn’t a big deal because code that was thought of as needing to be secure was hardened against it. But those very same semantics came as a very unwelcome surprise to “everyone” when Shellshock was publicized in 2014.

                                          Lots of vulnerabilities can be traced to people misunderstanding single lines of code.

                                          I very much agree with your point about it being hard to sell security. I think that’s by far the biggest factor in how horrible the current state of affairs is.

                                          1. 3

                                            Wasn’t the key Shellshock problem that bash executed environment variables? 99% of these failures seem to come from parsing errors, eval, and convenience components. Why bash’s designers felt it was good to allow arbitrary functions to be passed in environment variables and then executed baffles me, but it probably came from feature creep. The same functionality could have been achieved more safely by include/load-like mechanisms (not 100% safe either, but easier to lock down) or, better, by running other programs.

                                            BTW: Bash scripts are memory safe.

                                            1. 2

                                              It depends on what you see as the most unexpected part of it. The reason it was such an emergency to patch it was that bash was exposed to unauthenticated incoming network connections in the first place.

                                    3. -3

                                      Where are all the safe Rust based applications and operating systems?

                                        1. 6

                                          I mean, so what? The pool of talented systems programmers is vanishingly small, and also going to be dominated by people using C/C++, so it’s not that surprising that it hasn’t eradicated all the competition yet. And as a sibling commenter pointed out, it’s doing pretty well for being all of 3 years old.

                                          1. 0

                                            There is a big difference between “has not eradicated the competition yet” and “still can’t point to 5 widely used and superior applications”.

                                            1. 10

                                              How many did Python have within 3 years? Or Ruby, for that matter. How about Clojure? None of those had “5 widely used and superior applications” within 3 1/2 years. It was probably a decade before Python had 5, maybe a little less for Ruby. And you didn’t respond at all to my point about the quantity of systems programmers. Rust is a kind of weird language, attempting to fill a very tight ecological niche with few potential adoptees in terms of actual programmers. I’m not in the least surprised that it’s still very much in an embryonic state in regards to a community and the works that would flow from that. And, fwiw, Servo and Fuchsia aren’t exactly small-scale applications, not to mention all of the companies using it internally for whom we have no data/reports other than job board postings.

                                              1. 1

                                                I’m not at all saying Rust is a failure - I don’t find it appealing, but that doesn’t mean anything. But it’s still in the experiment stage. You can declare C obsolete when you have that body of large successful, less buggy, products to point at.

                                                1. 9

                                                  The whole point of the article is that literally decades of experience have shown that it is effectively impossible to write safe, secure code in C/C++. The goal in fact is to declare C/C++ a security nightmare. Does that make it obsolete? Maybe, maybe not. There are options out there that give you far more safety guarantees than C/C++ will ever be able to.

                                                  Knowing that, it is quite possibly flat out irresponsible to use it for any new project where security is important. Oh, and security is nearly always an important concern for a software project.

                                                  1. 1

                                                    Let me try again: To make that claim be anything more than marketing, you’d need some (a) data and (b) some indication that C programs were WORSE than some alternative. In the absence of both, it’s just weak marketing. The evidence seems to show more that it is very difficult to develop large software systems that are safe and secure and, at the same time, find an appreciative audience - no matter what the language. As I pointed out before, until there are significant examples of safe/secure/successful Rust programs to compare with, such claims are just blowing smoke.

                                                    1. 2

                                                      While it’s not as strong a claim as you seem to want, I don’t think it’s incorrect to say that C and C++ in the hands of not superhuman developers tends to result in a whole class of bugs that Rust makes nearly impossible to create. Bugs that have appeared numerous times and continue to keep appearing in critical internet infrastructure.

                                                      It’s purely anecdotal, but Rust has prevented multiple use-after-free, use-before-initialized, and buffer-overrun errors in my own code just while playing around. It’s a bit disingenuous to suggest that C/C++ don’t have a problem and that Rust, which provably prevents most of those problems, isn’t a promising solution.

                                                      This isn’t blowing smoke it’s a recognition that there is a problem and possible solution to that problem that doesn’t involve cloning Dan Bernstein and firing all the other programmers.

                                                      1. 2

                                                        I kind of like the idea of cloning Dan Bernstein and firing all the other programmers. It’s a big idea with some verve and panache.

                                                      1. 4

                                                        I agree the comment overstated by saying “effectively impossible.” Really hard with higher failure rates than safe languages would be more accurate. Yet, it seems misleading for you to use Bernstein and Hipp since most people are nowhere near as good at QA as them. Bernstein is also a security genius.

                                                        C defenders countering with the vulnerability results of security geniuses in minimalist apps instead of what average C coder achieves vs average user of safe language isnt a good comparison.

                                                        1. 0

                                                          Hence my quest for examples rather than handwaving. Examples of real applications that are not minimalist. Specific examples.

                                                          1. 2

                                                            Ripgrep? Firefox? The former is shipped and used for search in VS Code, the latter is pretty rapidly moving to using rust where it can to improve performance, and reduce security issues. Just two projects I can name off the top of my head as a non rust programmer.

                                                            1. 0

                                                              The same two projects everyone names and I’m not dismissing either of them. I’m just pointing out that the triumphal declarations of the obsolescense of C and dawn of the Reign of Rust lack sufficient backing. If we just got enthusiastic reports about what people wrote in Rust and how well it worked that would be interesting and impressive. But this overblown marketing stuff is just irritating.

                                                2. 5

                                                  Well, I guess ripgrep qualifies? Hopefully there will be more such applications.

                                                  1. -1

                                                    Ripgrep is interesting. Is it considerably less bug-ridden than ag? Is it considerably less bug-ridden than grep?

                                                    https://www.cvedetails.com/product/23804/GNU-Grep.html?vendor_id=72 Well grep doesn’t seem so bad.

                                                    Come on, people need to do better than this.

                                                    1. 7

                                                      Is it considerably less bug-ridden than ag?

                                                      Yes, by a very very large margin across at least a couple different spectrums.

                                                      Is it considerably less bug-ridden than grep?

                                                      Unlikely.

                                                      Come on, people need to do better than this.

                                                      Right. Nothing will ever be good enough. This is a classic Internet debate tactic. No matter what example anyone gives you, there will always be a reason to dismiss it. I don’t say that flippantly necessarily. Satisfying your standard (mentioned in another comment) to a high degree is nearly impossible. There will always be differences and variables that cannot reasonably be controlled for.

                                                      It’s totally fine to have high standards that are impossible to satisfy (it’s your choice), but at least state that up front explicitly and don’t be coy about it. And don’t accuse people who are trying to hit a lower standard of evidence as just “blowing smoke,” because that’s a bunch of bullshit and you know it.

                                                      1. -2

                                                        Oh come on: ripgrep is the example I’m given. Yet grep, written in horrible, pathetic C, has very few security bugs. Are there CVEs for ag even? I am ready to be persuaded and I don’t need ironclad proofs - I just want to see a number of examples of complex applications/systems written in XYZ that are significantly better than the standard C versions in terms of security. Until those are available it’s just marketing. Maybe Rust is a brilliant advance that will lead to the creation of highly reliable, secure applications - maybe it is not. Without evidence, all this claim that C has been obsoleted by Rust or whatever is exactly that - just blowing smoke. Read “software jewels” by Parnas; this is not a new problem.

                                                        I’ll give you a good example: the OP makes a big deal about security issues in ImageMagick! Of all things. I cannot imagine that anyone involved in the development of ImageMagick worried about security at all - it was a tool people could use to manipulate images. Now it is being dragged into service as an online utility exposed to the open internet or used on images that come from anywhere and - lo and behold - because of C it’s insecure! Great, let me see a Rust program designed without any attention to security employed similarly and we’ll compare.

                                              2. 1

                                                Quite a few swift based applications out there though, so it isn’t like new languages aren’t being used for things.
                                                Go is quite popular in certain spaces too, for what it’s worth.

                                            1. 19

                                              Seems like it’s mostly just “NumPy happened”. And people started building things on top of NumPy, and then things on top of these things…

                                              Also, machine learning doesn’t need types as much as something like compilers, web frameworks or GUI apps. The only type that matters for ML is matrices of floats, they don’t really have complex objects that need properties and relationships expressed.

                                              1. 11

                                                Types can be used for more than just stating that layers are matrices - have a look at the Grenade Haskell library, which lets you fully specify the shape of the network in types. You get compile-time guarantees that the layers fit together, so you don’t get to the end of a few days of training only to find your network never made sense.
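
To make the failure mode concrete: below is a small NumPy sketch (Python rather than Haskell, purely illustrative) where two layer matrices with mismatched inner dimensions are constructed happily, and nothing complains until data actually flows through them at runtime - exactly the error that type-level shapes catch at compile time.

```python
import numpy as np

# Two "layers" as weight matrices: 784->128 and 64->10.
# The inner dimensions (128 vs 64) don't match, but nothing
# complains until we actually run data through the "network".
w1 = np.zeros((784, 128))
w2 = np.zeros((64, 10))

x = np.zeros((1, 784))
h = x @ w1                 # fine: (1, 784) @ (784, 128) -> (1, 128)
try:
    y = h @ w2             # fails only now: 128 != 64
except ValueError as e:
    print("shape error:", e)
```

With shapes in the types, the equivalent of the `h @ w2` line simply would not compile, days of training notwithstanding.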

                                                1. 2

                                                  I’ve always thought that Idris would be the best language ever for ML

                                                  1. 1

                                                    Sadly Idris is not great wrt. unboxed data types. Lots of the dependent stuff it implements involves lots of boxing and pointer chasing… not the greatest for high performance computing. That’s not inherent to dependent types, but it’s something language designers need to tackle in the future if they want to meet the needs of high performance computing.

                                                    1. 2

                                                      Ah well yes, I was thinking more about an API layer for stuff like Tensorflow or Torch, where the Idris type system validates a DAG of operations at compile time and then it’s all translated with the bindings.

                                                    2. 1

                                                      The exascale-project languages like Chapel were my guess, since they (a) make parallelism way easier for many hardware targets and (b) were advertised to researchers in HPC labs. Didn’t happen. Still potential there, as multicore gets more heterogeneous.

                                                  2. 3

                                                    The only type that matters for ML is matrices of floats, they don’t really have complex objects that need properties and relationships expressed.

                                                    Is this fact inherent to the study of AI & ML, or is it just how we’ve decided to model things?

                                                    1. 3

                                                      I guess it’s inherent to the modern hardware. The reason for this deep learning hype explosion is that processors (fully programmable GPUs, SIMD extensions in CPUs, and now more specialized hardware too) have gotten very good at doing lots of matrix math in parallel. Someone rediscovered old neural network papers and realized that with these processors we can make the networks bigger, feed them “big data”, and get pretty good classifiers as a result.

                                                      1. 1

                                                        On top of cheaper. You can get what used to be an SGI Origin or Onyx2 worth of CPUs and RAM for new-car prices instead of mini-mansion prices. Moore’s law plus commodity clusters lowered the barrier to entry a lot.

                                                      2. 2

                                                        It is inherent to problems that can be represented in linear algebra. But many problems have different representations, like decision trees, for example. Regression and neural networks can mostly be written as matrix operations.

                                                        1. 1

                                                          I concede that matrices are the most fundamental and optimizable representation for ML. They are literal grids of values, after all; you can’t get much denser than that! However, is it still possible that they do not always lend themselves to higher-level modeling?

                                                          For instance, any useful general-purpose computation boils down to some equivalent of machine code—or a Turing machine, for the theoretically-minded. Despite this, we purposefully code in languages that abstract away from this fundamental, optimizable representation. We sacrifice some efficiency* in order to perform higher-order reasoning more effectively. Could (or should) the same be done for ML?

                                                          (*Note: sometimes, by letting in abstractions, we actually find new optimizations we hadn’t thought of before, as they require a higher-level environment to conceive of and implement conveniently & reasonably. See parallelism-by-default and lazy streams, as in Haskell. Parsing is yet another example of something that used to be done on a low-level, but that is now done more efficiently & productively due to the advent of higher-level tools.)

                                                          1. 2

                                                            ML is not limited to neural networks. Other ML models use different representations.

                                                            Matrices are an abstraction as well. It doesn’t really say that they are represented as dense arrays. In fact many libraries can use sparse arrays as needed. And performance comes not only from denser representation but from other effects like locality and less overhead compared to all the boxing/unboxing of higher level type systems or method dispatching from most OOP languages.

                                                            There is more abstraction at various levels. Some libraries allow the user to specify a neural network in terms of layers. Also matrices are algebraically manipulated as symbolic variables and that makes formulas look simpler.

                                                            I guess a few libraries support some kind of dataflow-ish programming by connecting boxes in a graph and having variables propagate as in a circuit. That is very close to the algebraic representation if you think of the formulas as abstract syntax trees for example.

                                                            Maybe more abstraction could be useful in defining not only the models but all the data ingestion, training policies, and production/operations as well.
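
The “formulas as abstract syntax trees” view above can be sketched in a few lines of Python (a toy illustration, not any particular library’s API): an expression graph as nested tuples, evaluated like a tiny circuit with variables propagating through it.

```python
# Toy expression graph: a node is (op, arg, arg), a variable is a
# string, and anything else is a constant leaf. This loosely mirrors
# the dataflow/AST view of symbolic ML libraries.

def ev(node, env):
    if isinstance(node, str):          # variable leaf
        return env[node]
    if not isinstance(node, tuple):    # constant leaf
        return node
    op, *args = node
    vals = [ev(a, env) for a in args]  # evaluate sub-circuits first
    if op == "+":
        return vals[0] + vals[1]
    if op == "*":
        return vals[0] * vals[1]
    raise ValueError(f"unknown op: {op}")

# y = w*x + b as a graph, evaluated like a circuit
graph = ("+", ("*", "w", "x"), "b")
print(ev(graph, {"w": 2.0, "x": 3.0, "b": 1.0}))  # 7.0
```

Because the graph is just data, the same structure could be differentiated, pretty-printed, or compiled instead of evaluated - which is the point of keeping the abstraction around.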

                                                    1. 1

                                                      Are there any good documents about the big changes that are planned for 12? There isn’t much on the wiki or draft changes page.

                                                      1. 4

                                                        This doc is slowly getting filled in. https://www.freebsd.org/releases/12.0R/relnotes.html

                                                        1. 2

                                                          in addition to what trousers mentioned, the main big, expected change in FreeBSD 12 is going to be drm-next-kmod, providing support for newer Intel and Radeon graphics chipsets.

                                                        1. 3

                                                          This looks interesting. Of course it’s a shame it’s based on Intel, but:

                                                          • PCI-e
                                                          • SATA
                                                          • 2 x gigabit ethernet
                                                          • x86
                                                          • VT-x + VT-d
                                                          • 32 GB ram
                                                          • 4 okay-ish cores

                                                          At first glance this looks like the first SBC that will actually be usable for things like routers, a virtualization host/hypervisor (in a cluster, for example), or a simple Linux desktop stuck to the back of a monitor. Price will be important though, since you also need to buy memory, while a lot of other SBCs have memory on the PCB.

                                                          1. 8

                                                            The fact that it’s based on Intel is, IMHO, a good thing. I’ve got a drawer full of SBCs that started out with lots of promise - ultimate power, great battery life, etc. - but are sitting there unused because the vendors failed to keep their kernel promises.

                                                            That’ll be less likely to happen with an Intel-based SBC, imho.

                                                            1. 4

                                                              Most ARM SoCs are decently supported by mainline operating systems. Which boards do you have and what would you like to use them for?

                                                              1. 2

                                                                Which ARM SoCs do you have that are supported on mainline? I’ve had nothing but all kinds of issues with ARM. I tried using an overpriced SolidRun as a router and ran into nothing but issues and terrible support.

                                                                I wrote another post on seeing these issues in Android devices. ARM is not a platform. It’s just random shit soldered to random pins. At least Microsoft phones had ARM + UEFI. I mean, we have device trees, but they’re usually broken to hell too, and most phone vendors don’t use them.

                                                                Is the particular device in this post a 3rd-party x86 clone? Is it free of the Management Engine or other 3rd-party controllers? I realize all x86 stuff has non-free binary blobs everywhere, whereas you can get a lot of totally free ARM chips/boards, but long-term support is often an issue. With x86+UEFI or even classic BIOS, you can run mainline Linux on them for years to come. There are even forks of Linux for older unsupported 386 chips, if you really want to buy a ton of old 386 stock and use them in embedded applications. ARM is a clusterfuck by comparison.

                                                                1. 3

                                                                  Rockchip RK3399/RK3328, Allwinner H3/H5/A64, Nvidia Tegra X1, the Broadcom junk that’s in the RPi…

                                                                  I run FreeBSD (actually I worked on RK3399 support), so there’s no non-mainline :) but for Linux, Rockchip is actually mainlining their official drivers, and for Allwinner it’s the community.

                                                                  Of course the cheap embedded boards aren’t as good as the high end server stuff (ThunderX/2/Centriq/eMAG/…), but there is a lot of support.

                                                                  1. 2

                                                                    OLIMEX has some interesting hardware and according to SUNXI Buying guide “Currently, Olimex is the only company creating Allwinner based OSHW, and Olimex actively contributes to the sunxi project.”.

                                                                    For some cheaper but less open options(I use an orange pi zero as a home media server/nas/cups/whatever) armbian provides quite decent support.

                                                                  2. 2

                                                                    I bought the original PINE64 and found the support to be pretty terrible; even today it feels like it’s all been hacked together by guests in China rather than the manufacturer doing much about it.

                                                                    1. 1

                                                                      It’s very well supported in FreeBSD.

                                                                      For Linux, just don’t go to the vendor, ever. Check Arch Linux ARM and Armbian. (Apparently Ethernet support was merged into mainline as late as 4.15, but it’s there now)

                                                                  3. 4

                                                                    I think the parent was implying AMD would have fewer microcode updates and more trustworthiness due to better QA than Intel. Likely inspired by the Meltdown/Spectre vulnerabilities. Also, AMD has been in the low-power SoC game for some time. I don’t know if you’ll get problems out of them that you wouldn’t out of Intel; it would surprise me a bit. I remember Soekris was using AMD Geodes.

                                                                    Oh shit:

                                                                    “Due to declining sales, limited resources available to design new products, and increased competition from Asia, Soekris Engineering, Inc. has suspended operations in the USA as of today.”

                                                                    Glanced at their page to see product updates. Got sadder news than I was looking for.

                                                                    1. 5

                                                                      I don’t know much about the Soekris boards, but pcengines.ch sells surprisingly affordable AMD Jaguar-based boards for embedded and network applications. I’m using one for my OPNSense firewall and have been perfectly happy with it.

                                                                      1. 1

                                                                        Thanks for the tip!

                                                                        1. 3

                                                                          From corebooting my ALIX2C3 I recalll the geode microcode has another issue in that it’s reliant on legacy tooling to build so you are encouraged to just use the blob (tooling is either DOS based or related to visual studio, can’t recall).

                                                                    2. 2

                                                                      If I remember properly, HardKernel had everything for their C2 platform mainlined, so you could use modern kernels without having to use a vendor-specific one.

                                                                    3. 2

                                                                      it’s a shame it’s based on Intel […] Price will be important though

                                                                      I too immediately thought “why not Ryzen?” but, price is actually the reason they went with Intel, according to the blog post that’s linked here. Excerpt:

                                                                      2017 December, We considered AMD Ryzen 5 2500U 3.5Ghz mobile processor. The performance was very impressive, but the price of the CPU was also very impressive. Fortunately, Intel also announced the Gemini Lake processors. It was slower than Ryzen but much faster than Intel Apollo Lake, and the price was reasonable.

                                                                      Looks like the board will be considerably cheaper due to the Intel chip.

                                                                    1. 2

                                                                      This is very cool. The CCC stuff Conal is working on is very exciting: it allows transforming programs, without changing them, into very useful representations. You can automatically differentiate numeric functions (really important for ML), produce hardware descriptions of a given function, or build data flow graphs. You can even combine all three: take an arbitrary Haskell function, compute its derivative, produce the hardware descriptions of both f and f’, and then generate dot files to render the hardware graph - and while we’re at it, we can produce proofs of correctness too!

                                                                      1. 20

                                                                        “(For the record, I’m pretty close to just biting the bullet and dropping $1800 on a Purism laptop, which meets all my requirements except the fact that I’m a frugal guy…)”

                                                                        One more thing to consider: vote with your wallet for ethical companies. One of the reasons all the laptop manufacturers are scheming companies pulling all kinds of bloatware, quality, and security crap is that most people buy their stuff anyway. I try where possible to buy from suppliers that act ethically toward customers and/or employees, even if it costs a reasonable premium. A recent example was getting a good printer at Costco instead of Amazon, where the price was similar. I only know of two suppliers of laptops that try to ensure user freedom and/or security: MiniFree and Purism. For desktops, there’s Raptor, but that’s not x86.

                                                                        Just tossing the philosophy angle out there in case anyone forgets we as consumers contribute a bit to what kind of hardware and practices we’ll see in the future every time we buy things. The user-controllable and privacy-focused suppliers often disappear without enough buyers.

                                                                        1. 10

                                                                          One more thing to consider: vote with your wallet for ethical companies

                                                                          Don’t forget the ethics of the manufacturing and supply chain of the hardware itself. I would imagine that the less well-known a Chinese-manufactured brand is the more likely it is to be a complete black box/hole in terms of the working conditions of the people who put the thing together, who made the parts that got assembled, back to the people who dug the original minerals out of the ground.

                                                                          I honestly don’t know who (if anyone) is doing well here - or even if there’s enough information to make a judgement or comparison. I think a while back there was some attention to Apple’s supply chain, I think mostly in the context of the iPhone and suicides at Foxconn, but I don’t know where that stands now - no idea if it got better, or worse.

                                                                          1. 6

                                                                            Apple has been doing a lot of work lately on supplier transparency and working conditions, including this year publishing a complete list of their suppliers, which is pretty unusual. https://www.apple.com/supplier-responsibility/

                                                                            1. 1

                                                                              Technically their list of suppliers covers the top 98% of their suppliers, so not a complete list, but still a very good thing to have.

                                                                              1. 1

                                                                                Most other large public companies do that too, just not getting the pat on the back as much as Apple.

                                                                                http://h20195.www2.hp.com/v2/getpdf.aspx/c03728062.pdf

                                                                              2. 2

                                                                                You both brought up a good concern and followed up with the reason I didn’t include it: I have no idea who would be doing well on those metrics. I think cheap, non-CPU components, boards, assembly, and so on are typically done in factories of low-wage workers in China, Malaysia, Singapore, etc. When looking at this, the advice I gave was to just move more production to Singapore or Malaysia to counter the Chinese threat, then make the wages and working conditions a bit better than they are. If both are already minimal, the workers would probably appreciate their jobs if they got a little more money, air conditioning, some ergonomic improvements, breaks, vacations, etc. At their wages and high volume, I doubt it would add a lot of cost to the parts.

                                                                              3. 9

                                                                                Funnily enough

                                                                                The Libreboot project recommends avoiding all hardware sold by Purism.

                                                                                1. 5

                                                                                  Yeah, that is funny. I can’t knock them for not supporting backdoored hardware, though. Of the many principles, standing by that one makes more sense than most.

                                                                                  1. 1

                                                                                    Correct me if I’m wrong, but I thought purism figured out how to shut down ME with an exploit? Is that not in their production machines?

                                                                                  2. 3

                                                                                    I agree, which is why I bought a Purism laptop about a year ago. Unfortunately, it fell and the screen shattered about 5 months after I got it, in January of this year. Despite support (which was very friendly and responded quickly) saying they would look into it and have an answer soon several times, Purism was unable to tell me if it was possible for them to replace my laptop screen, even for a price, in 6 months. (This while all the time they were posting about progress on their phone project.) Eventually I simply gave up and bought from System76, which I’ve been very satisfied with. I know they’re not perfect, but at least I didn’t pay for a Windows license. In addition my System76 laptop just feels higher quality - my Librem 15 always felt like it wasn’t held together SUPER well, though I can’t place why, and in particular the keyboard was highly affected by how tight the bottom panel screws were (to the point where I carried screwdrivers with me so I could adjust them if need be).

                                                                                    If you want to buy from Purism, I really do wish you the best. I truly hope they succeed. I’m not saying “don’t buy from Purism”; depending on your use case you may not find these issues to be a big deal. But I want to make sure you know what you’re getting into when buying from a very new company like Purism.

                                                                                    1. 1

                                                                                      Great points! That support sounds like it sucks to not even give you a definitive answer. Also, thanks for telling me about System76. With what Wikipedia said, that looks like another good choice for vote with your wallet.

                                                                                    2. 2

                                                                                      Raptor but that’s not x86

                                                                                      Looks like it uses POWER, which surprised me because I thought that people generally agreed that x86 was better. (Consoles don’t use it anymore, Apple doesn’t use it, etc)

                                                                                      Are the CPUs that Raptor is shipping even viable? They seem to not have any information than “2x 14nm 4 core processors” listed on their site.

                                                                                      1. 4

                                                                                        The FAQ will answer your questions. The POWER9 CPUs they use are badass compared to what’s in consoles, the PPCs Apple had, and so on. They go head to head with top enterprise parts from Intel that are mainly sold at outrageous prices. Raptor is the first time they’re available in desktops at $5,000 or below. The main goal is hardware that reduces the risk of attack while still performing well.

                                                                                    1. 1

                                                                                      I can’t seem to get any of the examples to load without errors.

                                                                                      1. 1

                                                                                        Replied on the issue.

                                                                                      1. 2

                                                                                        It would be interesting to see a comparison of this with the single-IORef-with-atomicModifyIORef pattern, which has been shown in the past to outperform many specialised concurrent structures - it turns out that combining purity and mutability gives you excellent ‘mutable’ concurrent structures.
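
For readers unfamiliar with the pattern: the idea is a single mutable cell holding an immutable (persistent) value, with updates that atomically swap in a new value computed by a pure function. Here is a rough Python analogue (in Haskell, atomicModifyIORef provides the atomic swap without an explicit lock; the lock below merely stands in for that guarantee):

```python
import threading

# One mutable cell holding an *immutable* value (a tuple). Updates
# apply a pure function to the old value and swap in the result
# atomically, so concurrent writers never corrupt shared state.
class Cell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def modify(self, f):
        # Atomic read-modify-write of the single cell.
        with self._lock:
            self._value = f(self._value)

    def read(self):
        return self._value

c = Cell(())  # start with an empty immutable tuple
threads = [threading.Thread(target=lambda i=i: c.modify(lambda t: t + (i,)))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(c.read()))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The appeal is that the only concurrency primitive is the single cell; all the interesting structure lives in ordinary immutable data.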

                                                                                        1. 4

                                                                                          Australians have been doing this for years, thanks Telstra!

                                                                                          1. 6

                                                                                            It hasn’t improved much IMO.

                                                                                            Haskell is normally extremely strong when it comes to well-designed and reusable abstractions. Unfortunately that appears to be more or less absent from the Haskell crypto libraries. There are a few different monad classes for pseudorandom number generation, for example, and all of them are overcomplicated. I often end up just rolling my own (monad, not PRNG) when I need clean random number generation.

                                                                                            There are a few decent libraries available for sundry concrete cryptographic tasks, but well below par for the Haskell ecosystem.

                                                                                            In fairness, cryptography libraries are bad across almost all languages, but I expect more from Haskell.

                                                                                            1. 4

                                                                                              Is it fair to suggest that Haskell expects more from you, too? I mean, you’re certainly welcome to contribute.

                                                                                              1. 3

                                                                                                In fairness, cryptography libraries are bad across almost all languages, but I expect more from Haskell.

                                                                                                Why? Does Haskell have any special features that make it fundamentally easier to correctly implement cryptography algorithms compared to other high-level languages? Parametricity doesn’t particularly help when all your routines map tuples of integers to tuples of integers.

                                                                                                1. 4

                                                                                                  Does Haskell have any special features that make it fundamentally easier to correctly implement cryptography algorithms compared to other high-level languages?

                                                                                                  Yes, e.g. QuickCheck, QuickSpec, LiquidHaskell, etc.

                                                                                                  1. 4

                                                                                    These get you some of the way, but there’s a whole class of side-channel attacks we have very little ability to reason about in Haskell. Timing, for example: I have no idea how to write a constant-time Integer multiplication algorithm and be sure it is constant time.

                                                                                    My dream for this sort of work is an inline-rust package, in the vein of inline-c, so we get memory safety but also a language that better allows timing analysis.
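
To make the timing concern concrete, here’s a small illustration in Python rather than Haskell (hmac.compare_digest is a real stdlib function; the naive comparison is a toy): an early-exit comparison leaks the position of the first mismatching byte through its running time, which is exactly why crypto code wants constant-time primitives.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit on the first mismatch: the running time depends on
    # *where* the inputs differ -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"correct horse battery staple"
guess = b"correct horse battery stapl0"

print(naive_equal(secret, guess))          # False, but fast/slow depending on input
# compare_digest takes time independent of the contents being compared.
print(hmac.compare_digest(secret, guess))  # False, in data-independent time
```

Both functions compute the same boolean; the difference an attacker cares about is only visible in the clock, which is what makes this property so hard to state in an ordinary type system.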

                                                                                                    1. 2

                                                                                                      inline-rust is something I want in every programming language. :)

                                                                                                      I think it’s possible that a subset of Haskell in which you only use primitive types and primops (like Word32# and MutableByteArray# and so on) and can’t have any laziness anywhere (because no values are ever boxed) might be more amenable to timing analysis.

                                                                                                      I’m not sure if there is a pragma or Language setting in GHC that can automatically enforce that everything in a given file uses only primitives and primops.

                                                                                                      1. 2

                                                                                        Check out Jasmin, a language for implementing high-assurance crypto. Once again, it’s achieved with a language quite opposite to Haskell’s high-level style.

                                                                                                        1. 2

                                                                                                          That would be cool indeed—but I can already viscerally imagine the impact on build times from invoking the Rust compiler via Template Haskell… :)

                                                                                                        2. 3

                                                                                                          In 2017, QuickCheck is by no means specific to Haskell. Nowadays you can find property-based testing libraries and frameworks for just about any language.

                                                                                                          As for LiquidHaskell, the real verification is performed by an external SMT solver. So again I don’t think this constitutes a Haskell-specific advantage.
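
For readers who haven’t seen the idea: a property-based test generates many random inputs and checks an invariant, rather than asserting on a few hand-picked examples. A hand-rolled sketch of the QuickCheck-style idea using only the Python stdlib (the XOR “cipher” is a toy, not real cryptography):

```python
import random

def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy "cipher": XOR every byte with a single-byte key.
    # Applying it twice with the same key is the identity.
    return bytes(b ^ key for b in data)

# Property: decrypt(encrypt(msg)) == msg for arbitrary random inputs.
rng = random.Random(0)  # seeded for reproducibility
for _ in range(1000):
    msg = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    key = rng.randrange(256)
    assert xor_cipher(xor_cipher(msg, key), key) == msg
print("roundtrip property held for 1000 random cases")
```

Dedicated frameworks (QuickCheck, Hypothesis, etc.) add shrinking of failing cases and smarter generators, but the core loop is just this.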

                                                                                                        3. 3

                                                                                                          Because Haskell libraries are, in general, much higher quality than libraries in other ecosystems I use. Correctness also isn’t the concern — I have little doubt that the crypto libraries are correct. The concern is usability. Most of the Haskell crypto libraries are clumsy, typically because they just wrap some C library without providing any additional abstractions.

                                                                                                          1. 2

                                                                                                            So you are confident the underlying C library is correct?

                                                                                                      1. 2

                                                                                                        I’ve done a very similar thing using Alpine and multi-stage builds. Using alpine:edge also gives you upx, which lets you compress your binary after the build before copying it into the next stage; with ldd you can figure out exactly which libraries are needed for dynamic linking, so you can end up with some really tiny images. Also, using GHC’s split objects you can build even smaller binaries. Maybe I should write a post about that sometime soon, once we actually put this into production…
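
                                                                                                        A rough sketch of the multi-stage pattern described above (image tags, package names, and the Main.hs/app names are placeholders; the exact ghc flags and packages will vary per project):

                                                                                                        ```dockerfile
                                                                                                        # Build stage: compile a static binary, then shrink it with upx.
                                                                                                        FROM alpine:edge AS build
                                                                                                        RUN apk add --no-cache ghc musl-dev upx
                                                                                                        WORKDIR /src
                                                                                                        COPY . .
                                                                                                        RUN ghc -O2 -optl-static -o app Main.hs \
                                                                                                         && upx --best app
                                                                                                        # ldd app  # if linking dynamically, lists the libraries the final image needs

                                                                                                        # Final stage: copy only the compressed binary into a fresh minimal image.
                                                                                                        FROM alpine:edge
                                                                                                        COPY --from=build /src/app /usr/local/bin/app
                                                                                                        ENTRYPOINT ["/usr/local/bin/app"]
                                                                                                        ```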

                                                                                                        1. 1

                                                                                                          Please do create that blog post! I’ve not used upx, got any links that explain that?

                                                                                                        1. 1

                                                                                                          Can someone explain what bridgeOS is?

                                                                                                          1. 2

                                                                                                            eOS 1.0 (which was renamed BridgeOS) was driving the touch bar in 2016/2017 MacBook Pros. BridgeOS 2.0 is the OS for essentially a system management controller capable of booting the x86 CPU and controlling a set of I/O, mainly so Apple can control the security model and do things like “Hey Siri.” (It uses an A10 Fusion SoC.)

                                                                                                          1. 2

                                                                                                            Hmm, I should dig out my pine64…

                                                                                                            1. 2

                                                                                                              I was thinking the same thing. I bought it with the intention of running a media server from it (hardware accel of video encode was something that made me buy it), but found working with Linux even more painful than i’d remembered. Having another OpenBSD machine will be a pleasure.

                                                                                                            1. 2

                                                                                                              Nice work mate. The only thing that stood out to me was defining updateMap when insertWith exists and would be more efficient by avoiding the member lookup first.

                                                                                                              1. 1

                                                                                                                Thank you! I stared at insertWith and ruled it out for some reason. Ended up resorting to updateMap which I wrote for a different blog post. I wrote this while streaming (2 hours and change). I’ll upload the video if this noise reduction filter doesn’t make the audio sound watery this time.

                                                                                                                Update: good catch, I’ve fixed it by replacing updateMap with insertWith.
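
                                                                                                                The double-lookup point generalises beyond Data.Map. As a hedged illustration in Go (standing in for Haskell here), lookup-then-insert touches the map twice per key, while a combined update touches it once, which is what insertWith buys you:

                                                                                                                ```go
                                                                                                                package main

                                                                                                                import "fmt"

                                                                                                                func main() {
                                                                                                                	words := []string{"go", "haskell", "go"}

                                                                                                                	// Two-step version: a membership test followed by a separate
                                                                                                                	// write, i.e. two map lookups per word (the updateMap shape).
                                                                                                                	slow := map[string]int{}
                                                                                                                	for _, w := range words {
                                                                                                                		if n, ok := slow[w]; ok {
                                                                                                                			slow[w] = n + 1
                                                                                                                		} else {
                                                                                                                			slow[w] = 1
                                                                                                                		}
                                                                                                                	}

                                                                                                                	// One-step version: missing keys read as the zero value, so a
                                                                                                                	// single indexed update does insert-or-combine, analogous to
                                                                                                                	// Data.Map.insertWith (+).
                                                                                                                	fast := map[string]int{}
                                                                                                                	for _, w := range words {
                                                                                                                		fast[w]++
                                                                                                                	}

                                                                                                                	fmt.Println(slow["go"], fast["go"])
                                                                                                                }
                                                                                                                ```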

                                                                                                              1. 3

                                                                                                                The exploit author Patrick W also makes an interesting bunch of OSX / MacOS security tools outside of his current employment, if you weren’t aware: www.objective-see.com

                                                                                                                1. 2

                                                                                                                  He also has a patreon which i’m really happy to contribute to - i’ve got several of his apps installed and they do a great job of doing things like pointing out any changes to any of the locations/systems (launchd) which can be used to start malware persistently. His new LuLu firewall looks really nice, but is probably a bit too alpha for me at this stage.

                                                                                                                1. 17

                                                                                                                  The wikipedia comparison is pretty good, but it’s still lacking. Some things that are important to me in a serialisation protocol:

                                                                                                                  • Binary: As soon as you open the door to text you get questions about encoding and whitespace, and it becomes very difficult to process efficiently. The edge cases will haunt you in a way they never will with a binary protocol.
                                                                                                                  • Self-describing: It should be possible for a program to read an arbitrary object (unlike XDR, Protocol Buffers, etc)
                                                                                                                  • Efficiency: JSON/MessagePack/XML are out, but so is DER (ASN.1) because integers are variable length
                                                                                                                  • Explicit references/cycles (e.g. plain JSON, but not Cap’n Proto or PHP’s serialize)
                                                                                                                  • Lots of types: ASN.1 has the right idea here, but it still falls short.
                                                                                                                  • Unambiguous: Fuck MessagePack. Seriously.

                                                                                                                  On the subject of types: k/q supports booleans, guids, bytes, shorts(16bit), ints(32bit), longs(64bit), real(32bit), float(64bit), characters, symbols, timestamps, months, dates, timespans (big interval), minutes, seconds, times, all as arrays or as scalars. It also supports enumerated types, plain/untyped lists, and can serialise functions (since the language is functional). None of the blog-poster’s suggestions can stand up to the kdb ipc/protocol, so clearly we needed at least one more protocol, but now what?

                                                                                                                  Something else I’m thinking about are capabilities/verified cookies. I don’t know if these can/should be encoded into the IPC (I tried this for a while in my ad server), but there was a clear advantage in having the protocol decoder abort early, so maybe a semantic layer should exist where the programmer can resolve references or cookies (however if you do it as a separate pass of the data, you’ll have efficiency problems again).

                                                                                                                  I think that if you can get away with an existing format, you should use it because you get to inherit all of the tooling that goes with it, but dogma that suggests serialisation is a solved problem is completely and obviously wrong.

                                                                                                                  1. 9

                                                                                                                    Cycles are hard to use safely. In my opinion, it’s much better to encode them explicitly when you need them (not that often) than to include them in the format itself. Other than that, I agree completely.

                                                                                                                    It is also important for the format to have a canonical form for when you deal with cryptography. Also, not having various lengths of numeric data types that are all treated differently is a great boon for current scripting languages.

                                                                                                                    Have you seen RFC 7049: Concise Binary Object Representation (CBOR)? It has JSON semantics with additional support for chunked transfers and sub-typing (interpret this string as a date or something).

                                                                                                                    RFC 8152: CBOR Object Signing and Encryption (COSE) also sounds promising. I believe that it’s time for DER and the whole ASN.1 world to go.

                                                                                                                    1. 7

                                                                                                                      I was surprised not to see CBOR in the list of formats, it’s actually an incredibly elegant encoding which can be efficiently decoded and provides a huge amount of flexibility at the same time (and is sensible enough to leave space to add more things if they become necessary). Haskell’s serialise and cborg libraries have adopted it, and I hope these will become the canonical serialisation format for Haskell data, replacing the really ad hoc and less efficient encoding currently offered by the binary and cereal packages.

                                                                                                                      CBOR is a protocol done right, with standardisation and even an IANA registry for tags and other things. It’s also part of CoAP, the REST-for-IoT-but-efficient standard (I think; I’m not familiar enough with exactly what CoAP does).

                                                                                                                      Edit: video describing the protocol and how it’s likely to be used in Haskell https://youtu.be/60gUaOuZZsE
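
                                                                                                                      The elegance is easy to demonstrate: every CBOR item starts with one head byte carrying the major type in its top three bits, followed by a shortest-form length/value argument (per RFC 7049). A minimal, illustrative encoder sketch in Go, covering just the head byte plus one worked example:

                                                                                                                      ```go
                                                                                                                      package main

                                                                                                                      import (
                                                                                                                      	"encoding/binary"
                                                                                                                      	"fmt"
                                                                                                                      )

                                                                                                                      // head builds a CBOR initial byte: major type in the high 3 bits,
                                                                                                                      // followed by the shortest argument encoding for n (RFC 7049 §2.1).
                                                                                                                      func head(major byte, n uint64) []byte {
                                                                                                                      	switch {
                                                                                                                      	case n < 24:
                                                                                                                      		return []byte{major<<5 | byte(n)}
                                                                                                                      	case n < 1<<8:
                                                                                                                      		return []byte{major<<5 | 24, byte(n)}
                                                                                                                      	case n < 1<<16:
                                                                                                                      		b := make([]byte, 3)
                                                                                                                      		b[0] = major<<5 | 25
                                                                                                                      		binary.BigEndian.PutUint16(b[1:], uint16(n))
                                                                                                                      		return b
                                                                                                                      	case n < 1<<32:
                                                                                                                      		b := make([]byte, 5)
                                                                                                                      		b[0] = major<<5 | 26
                                                                                                                      		binary.BigEndian.PutUint32(b[1:], uint32(n))
                                                                                                                      		return b
                                                                                                                      	default:
                                                                                                                      		b := make([]byte, 9)
                                                                                                                      		b[0] = major<<5 | 27
                                                                                                                      		binary.BigEndian.PutUint64(b[1:], n)
                                                                                                                      		return b
                                                                                                                      	}
                                                                                                                      }

                                                                                                                      func main() {
                                                                                                                      	// {"a": 1}: map head (major 5, 1 pair), text string "a"
                                                                                                                      	// (major 3, length 1), then unsigned int 1 (major 0).
                                                                                                                      	enc := head(5, 1)
                                                                                                                      	enc = append(enc, head(3, 1)...)
                                                                                                                      	enc = append(enc, 'a')
                                                                                                                      	enc = append(enc, head(0, 1)...)
                                                                                                                      	fmt.Printf("%x\n", enc)
                                                                                                                      }
                                                                                                                      ```

                                                                                                                      A real encoder would of course also handle floats, byte strings, tags, and indefinite-length (chunked) items, but the head-byte scheme above is the whole core of the format.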

                                                                                                                      1. 0

                                                                                                                        What do you mean by “JSON semantics”? JSON has really terrible semantics, especially around numbers.

                                                                                                                        1. 5

                                                                                                                          I should have said JSON-compatible semantics.

                                                                                                                          Most of the types in CBOR have direct analogs in JSON. All JSON values, once decoded, directly map into one or more CBOR values.

                                                                                                                          The conversion from CBOR to JSON is lossy. CBOR supports limited-size integers, arbitrary-precision integers and floats. It also has support for NaN and Infinities.

                                                                                                                          1. 1

                                                                                                                            By this definition, is there any serialization format that is not JSON-compatible?

                                                                                                                      2. 1

                                                                                                                        Always great to see another k programmer around. Been several years but man that was a trip.

                                                                                                                      1. 2

                                                                                                                        aircraft, helicopters, container ships: you don’t need to know theory to use them. Just turn on the engine and GO!!!

                                                                                                                        1. 1

                                                                                                                          A fairer comparison would be numbers, vectors and matrices: you certainly need to know a lot of theory to fly, but very little to make use of these objects.

                                                                                                                        1. 2

                                                                                                                          The results of this talk (and paper) are truly amazing. Can’t wait to see it get some good use.

                                                                                                                          1. 5

                                                                                                                            Google is stopping one of the most controversial advertising formats: ads inside Gmail that scan users’ email contents. The decision didn’t come from Google’s ad team, but from its cloud unit, which is angling to sign up more corporate customers.

                                                                                                                            You think they’d do it out of decency… nope.

                                                                                                                            1. 6

                                                                                                              At this point in time we have already collected enough information about our customers through their most personal emails, and have noticed that new emails aren’t adding anything to our models any more.

                                                                                                                            1. 29

                                                                                                                              tl;dr: wants generics

                                                                                                                              1. 11

                                                                                                                                Maybe we should add a new tag, “go-generics”… :-)

                                                                                                                                1. 5

                                                                                                                                  That’s not really true. He wants some way to avoid writing the same code multiple times. Generics is a solution to that, but not the only possible solution.
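
                                                                                                                    For concreteness, this is roughly what the duplication looks like in Go today, plus one non-generic way around it via an interface (function names here are just illustrative):

                                                                                                                    ```go
                                                                                                                    package main

                                                                                                                    import (
                                                                                                                    	"fmt"
                                                                                                                    	"sort"
                                                                                                                    )

                                                                                                                    // Without parametric polymorphism, the same "max" logic must be
                                                                                                                    // written once per concrete type.
                                                                                                                    func maxInt(a, b int) int {
                                                                                                                    	if a > b {
                                                                                                                    		return a
                                                                                                                    	}
                                                                                                                    	return b
                                                                                                                    }

                                                                                                                    func maxFloat64(a, b float64) float64 {
                                                                                                                    	if a > b {
                                                                                                                    		return a
                                                                                                                    	}
                                                                                                                    	return b
                                                                                                                    }

                                                                                                                    // One non-generic alternative: program against an interface instead
                                                                                                                    // of a concrete type, as the standard library's sort package does.
                                                                                                                    // Returns the index of the largest element.
                                                                                                                    func largest(data sort.Interface) int {
                                                                                                                    	best := 0
                                                                                                                    	for i := 1; i < data.Len(); i++ {
                                                                                                                    		if data.Less(best, i) {
                                                                                                                    			best = i
                                                                                                                    		}
                                                                                                                    	}
                                                                                                                    	return best
                                                                                                                    }

                                                                                                                    func main() {
                                                                                                                    	fmt.Println(maxInt(3, 7))
                                                                                                                    	fmt.Println(maxFloat64(2.5, 1.0))
                                                                                                                    	xs := sort.IntSlice{4, 9, 2}
                                                                                                                    	fmt.Println(xs[largest(xs)])
                                                                                                                    }
                                                                                                                    ```

                                                                                                                    The interface route works, but it trades compile-time type checking for boilerplate wrapper types, which is exactly the kind of tradeoff these complaints are about.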

                                                                                                                                  1. 10

                                                                                                                                    it’s half the truth, i’ll give you that :)

                                                                                                                      i’m just a bit annoyed by the go complaints, it’s the ever repeating same. with go you sometimes have to copy some code, but that can be minimized if done well. go sometimes is verbose, but that’s imho the tradeoff for the small syntax and orthogonality.

                                                                                                                                    last but not least: maybe just use another tool if go doesn’t work for you ;)

                                                                                                                                    1. 7

                                                                                                                                      i’m just a bit annoyed by the go complaints, it’s the ever repeating same. with go you sometimes have to copy some code

                                                                                                                                      Copy code, copy complaints. The price you pay for a simple language.

                                                                                                                                      last but not least: maybe just use another tool if go doesn’t work for you ;)

                                                                                                                        I can’t speak for the author’s situation but sometimes these posts are born more from others imposing solutions. There are plenty of good projects where, if you want to contribute, you have no choice but to use what they chose. And, more often in the case of Java, your employer might force a technology. These blog posts are sometimes a desperate plea for help.

                                                                                                                                      1. 2

                                                                                                                                        And, more often in the case of Java, your employer might force a technology. These blog posts are sometimes a desperate plea for help.

                                                                                                                          i’m aware of that, but then, there are so many posts that say “generics are the solution” that i wonder why the author had to include that point. the language isn’t going to change for the foreseeable future, and it’s better to just show others how to solve these problems within the current boundaries of the language (imho). it’s just more productive and helpful.

                                                                                                                          having to use a tool you are not familiar with (and not having the time to properly study the docs) sucks, maybe this is the problem for the author of the article. i guess i’d be lost if i had to use c# and would complain, too :)

                                                                                                                                      2. 3

                                                                                                                                        i’m just a bit annoyed by the go complaints, it’s the ever repeating same.

                                                                                                                                        It’s not unique to Go. People’s complaints about C++, Java, Ruby, &c. have all been more or less the same in the last ten years as far as I can tell. C++ is confusing, Java is cumbersome, Ruby crashes and uses too much memory.

                                                                                                                                        As long as the languages don’t change, the complaints won’t, either.

                                                                                                                                        1. 0

                                                                                                                                          Only one of the languages you’ve mentioned has been wilfully ignorant of the history of language design, to the point of being proud of its ignorance.

                                                                                                                                          1. 3

                                                                                                                                            I keep seeing this meme repeated without sources, which bothers me - sourcing claims like this makes the difference between a cogent argument and an ad hominem attack.

                                                                                                                                            1. 1

                                                                                                                                              It would be good to have a more “sourced” and supported debate about Go in general. That would change the “pro” camp’s approach as well, though. No more “Go was supposed to be X so X is how it is supposed to be. If you don’t like it, don’t argue about it – use another language.”.

                                                                                                                                              1. 1

                                                                                                                                                Absolutely.

                                                                                                                                                A good example of sourcing, for the ‘go generics’ debate:

                                                                                                                                                Four proposals for generics with unacceptable tradeoffs (skip to the end of each for the summary of why they don’t work out).

                                                                                                                                                1. 1

                                                                                                                                                  These make me wonder. Many languages have parameterized types; and in usable forms, to boot. The “unacceptable tradeoffs” must be acceptable in those languages – how is that? To say that those languages are bad, or do not achieve Go’s goals, is just assuming the conclusion.

                                                                                                                                                  Only two of these proposals (2010-06 and 2013-10) have sections set aside for comparisons to the literature, and they are thin. The first one mentions only C++, the second mentions C (?), C++ and Java. There’s a lot of other systems out there, though; many are decades old, like ML’s, and work quite differently from those of C, C++ and Java. More recent languages – like Haskell and Scala – provide further examples to work from.

                                                                                                                                                  These proposals by themselves – which of course does not make a conclusive case – do seem to indicate that Go is being developed with little reference to much of the work done in language design, and in a way that’s deliberately narrow. First, they don’t mention that work; and second, many of the reasons given for rejecting this or that parameterized types proposal – like some of the syntax complaints – seem quite finicky and particular (and eminently solvable).

                                                                                                                                            2. 1

                                                                                                                                              But let’s say it weren’t – people would still complain about it, and that wouldn’t be any kind of special discrimination against that language. People who are annoyed by complaints about their favourite language would do well to consider how other languages are treated before bringing a case before us.