1. 40
  1. 10

    What’s stopping Apple from making Swift a system-level language to start replacing C? They don’t have to worry about portability, because they only support specific architectures, and those have to support Swift anyway.

    1. 3

      I’m surprised that Jailbreaking keeps going on. I thought they’d soon run out of libtiff vulnerabilities and the fun would stop. But no, there’s a dozen hacks for the latest iOS. Despite the fact that Apple keeps adding more and more layers of encryption, trusted hardware, sandboxing…

      Given sufficient bug density, security design is irrelevant.

      1. 3

        It’s unfortunate that most of these posts clump C with C++. Yes, it does reference Modern C++ Won’t Save Us. The question I would love answered is, does modern C++ solve 80% of the problems? Because 80% is probably good enough IMO if solving for 99% distracts us from other important problems.

        1. 11

          The question I would love answered is, does modern C++ solve 80% of the problems?

          The reason this isn’t really answered is because the answer is a very unsatisfying, “yes, kinda, sometimes, with caveats.”

          The issues highlighted in Modern C++ Won’t Save Us are not straw men; they are real. The std::span issue is one I’ve actually hit, and the issues highlighted with std::optional are likewise very believable. They can and will bite you.

          On the other hand, there is nothing keeping you from defining a drop-in replacement for e.g. std::optional that simply doesn’t define operator* and operator->, which would suddenly and magically not be prone to those issues. As Modern C++ Won’t Save Us itself notes, Mozilla has done something along these lines with std::span, too, closing the use-after-free hole that the official standard allows. These structures behave the way they do because they’re trying to be drop-in replacements for bare pointers in the 90% case, but they’re doing it at the cost of safety. If you’re doing a greenfield C++ project, you can instead opt for safe variants that aren’t drop-in replacements, but that avoid use-after-free, buffer overruns, and the like. But those are, again, not the versions specified to live in std::.
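          For concreteness, here’s roughly what such a non-drop-in replacement could look like. This is a sketch with an illustrative name, not a real library:

          ```cpp
          #include <optional>
          #include <utility>

          // Sketch: wrap std::optional but expose only the throwing accessor,
          // omitting the unchecked operator* and operator-> entirely.
          template <typename T>
          class CheckedOptional {
              std::optional<T> inner_;

          public:
              CheckedOptional() = default;
              CheckedOptional(T value) : inner_(std::move(value)) {}

              bool has_value() const { return inner_.has_value(); }

              // The only way in: throws std::bad_optional_access when empty,
              // instead of the undefined behavior of dereferencing an empty optional.
              T& value() { return inner_.value(); }
              const T& value() const { return inner_.value(); }
          };
          ```

          You give up the pointer-like ergonomics, which is exactly the trade being discussed.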

          And that’s why the answer is so unsatisfying: std::move, rvalue references, unique_ptr, and so on give you the foundation for C++ to be…well, certainly not Rust, but a lot closer to Rust than to C. But the standard library, due to a mixture of politics and a strong desire for backwards compatibility with existing codebases, tends to opt for ergonomics over security.

          1. -1

            I think you hit the nail on the head, C++ is ergonomic. I guess I don’t like the idea that Rust would get in the way of me expressing my ideas (even if they are bad). Something about that is offensive to me. But of course, that isn’t a rational argument.

            Golang, on one hand, is like speaking like a 3-year-old, and Rust is like speaking the language in 1984. C++, on the other hand, is kind of poetic. I think that people forget software can be art and self-expression, just as much as it can be functional.

            1. 10

              I guess I don’t like the idea that Rust would get in the way of me expressing my ideas (even if they are bad). Something about that is offensive to me.

              Isn’t it more offensive to tell users that you are putting them at greater risk of security vulnerabilities because you don’t like to be prevented from expressing yourself?

              1. 3

                That’s an original take on it.

              2. 6

                It doesn’t get in the way of expressing your ideas. It gets in the way of you expressing them in a way where it can’t prove they’re safe. A way where the ideas might not actually work in production. That’s a distinction I think is worthwhile.

                1. 5

                  I think we agree strongly that Rust constrains what you can say. Where we have different tastes is that I like that. To me, it’s the kind of constraint that sparks artistic creativity, and by reasoning about my system so that I can essentially prove that its memory access patterns are safe, I think I get a better result.

                  But I understand how a different programmer, or in different circumstances, would value the freedom to write pretty much any code they like.

                  1. 3

                    I don’t like the idea that Rust would get in the way of me expressing my ideas

                    Every language allows you to express yourself in a different way; a Javascript programmer might say the same of C++. There is poetry in the breadth of concepts expressible (and inexpressible!) in every language.

                    I started out with Rust by adding .clone() to everything that made it complain about borrowing, artfully(?) skirting around the parts that seem to annoy everyone else until I was ready. Sure, it might have made it run a bit slower, but I knew my first few (er, several) programs would be trash anyway while I got to grips with the language. I recommend it if you’re curious but reticent about trying it out.

                    – The Rust Evangelism Strike Force

                    1. 3

                      That is true, you have to do things “the Rust way” rather than your way. People do react with offense to “no, you can’t just modify that!”

                      However, I found Rust gave me a vocabulary and building blocks for common patterns, which in C I’d “freestyle” instead. Overall this seems more robust and readable, because other Rust users instantly recognize what I’m trying to do, instead of second-guessing ownership of pointers, thread-safety, and meaning of magic booleans I’d use to fudge edge cases.

                  2. 5

                    Tarsnap is written in C. I think it’s ultra unfortunate that C has gotten a bad rap due to the undisciplined people who use it.

                    C and C++ are tools to create abstractions. They leave many ways to burn yourself. But they also represent closely how machines actually work. (This is more true of C than C++, but C is a subset of C++, so the power is still there.)

                    This is an important quality often lost in “better” programming languages. It’s why most software is so slow, even when we have more computing power than our ancestors could ever dream of.

                    I fucking love C and C++, and I’m saddened to see them become a target for hatred. People have even started saying that if you actively choose C or C++, you are an irresponsible programmer. Try writing a native node module in a language other than C++ and see how far you get.

                    1. 25

                      Tarsnap is written in C. I think it’s ultra unfortunate that C has gotten a bad rap due to the undisciplined people who use it.

                      I think the hate for C and C++ is misplaced; I agree. But I also really dislike phrasing the issue the way you have, because it strongly implies that bugs in C code are purely due to undisciplined programmers.

                      The thing is, C hasn’t gotten a bad rap because undisciplined people use it. It’s gotten a bad rap because disciplined people who use it still fuck up—a lot!

                      Is it possible to write safe C? Sure! The techniques involved are a bit arcane, and probably not applicable to general programming, but sure. For example, dsvpn never calls malloc. That’s definitely a lot safer than normal C.

                      But that’s not the default, and not doing it that way doesn’t make you undisciplined. A normal C program is gonna have to call malloc or mmap at some point. A normal C program is gonna have to pass around pointers with at least some generic/needs-casting members at some point. And as soon as you get into those areas, C, both the language and the ecosystem, makes you one misplaced thought away from a vulnerability.
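                      To make “one misplaced thought” concrete, here’s the kind of thing that compiles without a peep (an illustrative snippet, not from any real project):

                      ```cpp
                      #include <cstdio>
                      #include <cstdlib>
                      #include <cstring>

                      int main(void) {
                          char* name = (char*)malloc(16);
                          if (!name) return 1;
                          strcpy(name, "backup-2019");
                          free(name);
                          // One misplaced thought later: the pointer is still in scope,
                          // the compiler is perfectly happy, and this is a use-after-free.
                          printf("%s\n", name);
                          return 0;
                      }
                      ```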

                      This is an important quality often lost in “better” programming languages. It’s why most software is so slow, even when we have more computing power than our ancestors could ever dream of.

                      You’re dancing around a legitimate issue here, which is that some languages that are safer (e.g. Python, Java, Go) are arguably intrinsically slower because they have garbage collection and force a higher level of abstraction away from the hardware. But languages like Zig, Rust, and (to be honest) bygones like Turbo Pascal and Ada prove that you don’t need to be slower to be safer, either at compile time or at runtime. You need stricter guarantees than C offers, but you don’t need to slow down the developer in any other capacity.

                      No, people shouldn’t hate on C and C++. But I also don’t think they’re wrong to try very hard to avoid C and C++ if they can. I think you are correct that a problem until comparatively recently has been that giving up C and C++, in practice, meant going to C#, Java, or something else that was much higher on the abstraction scale than you needed if your goal were merely to be a safer C. But I also think that there are enough new technologies either already here or around the corner that it’s worth admitting where C genuinely is weak, and looking to those technologies for help.

                      1. 2

                        You need stricter guarantees than C offers, but you don’t need to slow down the developer in any other capacity.

                        Proven in a couple of studies with this one (pdf) being the best. I’d love to see a new one using Rust or D.

                        1. 1

                          I also really dislike phrasing the issue the way you have, because it strongly implies that bugs in C code are purely due to undisciplined programmers.

                          You dislike the truth, then. If you don’t know how to free memory when you’re done with it and then not touch that freed memory, you should not be shipping C++ to production.

                          You namedrop Rust. Note that its borrow checker can’t track borrows of disjoint subsets of an array. Will you admit that safety comes at a cost? That bounds checking is a cost, and that you won’t ever achieve the performance you otherwise could have if you have these checks?
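                          In concrete terms (an illustrative comparison, nothing more), this is the cost I mean:

                          ```cpp
                          #include <cstddef>
                          #include <vector>

                          // at() checks bounds and throws on a bad index; operator[] skips
                          // the check and is undefined behavior out of range instead.
                          int sum_checked(const std::vector<int>& v) {
                              int s = 0;
                              for (std::size_t i = 0; i < v.size(); ++i) s += v.at(i);  // check per access
                              return s;
                          }

                          int sum_unchecked(const std::vector<int>& v) {
                              int s = 0;
                              for (std::size_t i = 0; i < v.size(); ++i) s += v[i];     // no check
                              return s;
                          }
                          ```

                          Whether an optimizer can elide the checks in a loop like this is, of course, exactly what the performance argument turns on.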

                          Note that Rust’s compiler is so slow that it’s become a trope. Any mention of this will piss off the Rust Task Force into coming out of the woodwork with how they’ve been doing work on their compiler and “Just wait, you’ll see!” Yet it’s slow. And if you’re disciplined with your C++, then instead of spending a year learning Rust, you may as well just write your program in C++. It worked for Bitcoin.

                          It worked for Tarsnap, Emacs, Chrome, Windows, and a litany of software programs that have come before us.

                          I also call to your attention the fact that real-world hacks rarely occur via corrupted memory. The most common vector (by far!) to breach your corporation is spearphishing your email. If you analyze the number of times that a memory corruption actually matters and actually causes real-world disasters, you’ll be forced to conclude that a crash just isn’t that significant.

                          Most people shy away from these ideas because it offends them. It offended you when I said “Most programmers suck.” But you know what? It’s true.

                          I’ll leave off with an essay on the benefits of fast software.

                          1. 12

                            It worked for […] Chrome

                            It’s difficult for me to reconcile this with the perspective of a senior member of the Chrome security team: https://twitter.com/fugueish/status/1154447963051028481

                            Chrome, Chrome OS, Linux, Android — same problem, same scale.

                            Here’s some of the Fish in a Barrel analysis of Chrome security advisories:

                            1. 1

                              This implies an alternative could have been used successfully.

                              Even today, would anyone dare write a browser in anything but C++? Even Rust is a gamble, because it implies you can recruit a team sufficiently proficient in Rust.

                              Admittedly, Rust is a solid alternative now. But most companies won’t make the switch for a long time.

                              Fun exercise: Criticize Swift for being written in C++. Also V8.

                              C++ is still the de facto standard for interoperability, too. If you want to write a library most software can use, you write it in C or C++.

                              1. 13

                                C++ is still the de facto standard for interoperability, too.

                                C is the de facto standard for interoperability. C++ is about as bad as Rust, and for the same reason: you can’t use generic types without compiling a specialized version of the templated code.
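                                To make that concrete: the usual way to make a C++ library interoperable is to export a flat C ABI over it, which nearly every language’s FFI can call. A rough sketch (all names made up):

                                ```cpp
                                #include <string>

                                // C++ implementation detail; callers never see this type's layout.
                                struct Greeter {
                                    std::string prefix;
                                };

                                // The interoperable surface: plain functions, C linkage, C types only.
                                extern "C" {
                                    Greeter* greeter_new(const char* prefix) { return new Greeter{prefix}; }
                                    const char* greeter_prefix(const Greeter* g) { return g->prefix.c_str(); }
                                    void greeter_free(Greeter* g) { delete g; }
                                }
                                ```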

                                1. 8

                                  You’re shifting the goalposts here. “No practical alternative to C++” is altogether unrelated to “C and C++ are perfectly safe in disciplined programmers’ hands” which you claimed above.

                                  And no, empirically they are not safe, a few outlier examples notwithstanding (and other examples like Chrome and Windows don’t really support your claim). It’s also illogical to suggest that just because there are a handful of developers in the world who managed to avoid all the safety issues in their code (maybe), C and C++ are perfectly fine for wide industry use by all programmers, who, in your own view, aren’t disciplined enough. Can’t you see that it doesn’t follow? I can never understand why people keep making this claim.

                                  1. -1

                                    Also known as “having a conversation.”

                                    But, sure, let’s return to the original claim:

                                    C and C++ are perfectly safe in disciplined programmers’ hands

                                    Yes, I claim this with no hubris, as someone who has been writing C++ off and on for well over a decade.

                                    I’m prepared to defend that claim with my upcoming project, SweetieKit (NodeJS for iOS). I think overall it’s quite safe, and that if you manage to crash while using it, it’s because you’ve used the Objective-C API in a way that would normally crash. For example, pick apart the ARKit bindings: https://github.com/sweetiebird/sweetiekit/tree/cb881345644c2f1b2ac1a51032ec386d1ddb7ced/node-ios-hello/NARKit

                                    I don’t think SweetieKit could have been made in any other language, partly because binding to V8 is difficult from any other language.

                                    I do not claim at the present time that there are no bugs in SweetieKit (nor will I ever). But I do claim that I know where most of them probably are, and that there are few unexpected behaviors.

                                    Experience matters. Discipline matters. Following patterns matters. Complaining that C++ is inherently unsafe is like claiming that MMA fighters will inherently lose: the claim makes no sense, first of all, and it’s not true. You follow patterns while fighting. And you follow patterns while programming. Technique matters!

                                    1. 5

                                      Perhaps you are one of the few sufficiently disciplined programmers! But I really can’t agree with your last paragraph when, for example, Microsoft says this:

                                      the root cause of approximately 70% of security vulnerabilities that Microsoft fixes and assigns a CVE (Common Vulnerabilities and Exposures) are due to memory safety issues. This is despite mitigations including intense code review, training, static analysis, and more.

                                      I think you have a point about the impact of these types of issues compared to social engineering and other attack vectors, but I’m not quite sure that it’s sufficient justification if there are practical alternatives which mostly remove this class of vulnerabilities.

                                      1. 1

                                        For what it’s worth, I agree with you.

                                        But only because programmers in large groups can’t be trusted to write C++ safely in an environment where safety matters. The game industry is still mostly C++ powered.

                                        1. 3

                                          I agree with that. I’ll add that games:

                                          (a) Have lots of software problems that even the players find and use in-game.

                                          (b) Sometimes have memory-related exploits that have been used to attack the platforms.

                                          (c) Dodge lots of issues that languages such as Rust address, because you can use memory pools for a lot of stuff (see the sketch below). I’ll also add that’s supported by Ada, too.
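                                          A minimal sketch of the pool pattern I mean (illustrative only; real pools add alignment, typed slots, and reuse):

                                          ```cpp
                                          #include <cstddef>
                                          #include <vector>

                                          // Allocate one slab up front, hand out chunks by bumping an offset,
                                          // and release everything at once per frame or per level. There's no
                                          // per-object free, so no per-object use-after-free or double-free.
                                          class FramePool {
                                              std::vector<unsigned char> slab_;
                                              std::size_t used_ = 0;

                                          public:
                                              explicit FramePool(std::size_t bytes) : slab_(bytes) {}

                                              void* alloc(std::size_t n) {
                                                  if (used_ + n > slab_.size()) return nullptr;  // pool exhausted
                                                  void* p = slab_.data() + used_;
                                                  used_ += n;
                                                  return p;
                                              }

                                              void reset() { used_ = 0; }  // "free" the whole frame in O(1)
                                          };
                                          ```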

                                          Preventable, buggy behavior in games developed by big companies continues to annoy me. That’s a sampling bias that works in my favor. If what I’ve read is correct, they’re better and harder-working programmers than average in C++ but still have these problems alternatives are immune to.

                                          1. 3

                                            That, and games encourage an environment of ignoring security or reliability in favour of getting the product out the door, and then no long-term maintenance. If it weren’t for the consoles, they wouldn’t even have security on their radar.

                                            1. 1

                                              Yeah. On top of it, the consoles showed how quickly the private market could’ve cranked out hardware-level security for our PCs and servers… if they cared. Also, what the lower end of the per-unit price might be.

                                  2. 6

                                    Would anyone dare write a browser in anything but C++?

                                  That’s preeeetty much the whole reason Mozilla made Rust. It now powers a decent chunk of Firefox, especially the performance-sensitive parts like, say, the rendering engine.

                                2. 5

                                  If you don’t know how to free memory when you’re done with it and then not touch that freed memory, you should not be shipping C++ to production.

                                  Did you ever botch up memory management? If you say “no” I am going to assume you haven’t ever used C nor C++.

                                  1. 5

                                    There’s a difference between learning and shipping to production.

                                    Personally, I definitely can’t get memory management right by myself and I’m pretty suspicious of people who claim they can, but people can and do write C++ that only has a few bugs.

                                    1. 5

                                      There’s always one edge case or another that makes it slip into prod, even with the experts. A toolchain on a legacy project that has none of the sanitizer flags you are used to. An integrated third-party library with an ambiguous lifecycle description. A tired dev at the end of a long stint. Etc., etc.

                                      Tooling helps, but anytime an allocation bug is caught in that safety net, it means you already screwed up on your part.

                                      1. 5

                                        Amen. I thought I was pretty good a while back having maintained a desktop app for years (C++/MFC (yeah, I know)). Then I got on a team that handles a large enterprise product - not exclusively C++ but quite a bit. There are a couple of C++ guys on the team that absolutely run circles around me. Extremely good. Probably 10x good. It was (and has been) a learning experience. However, every once in a while we will still encounter a memory issue. It turns out that nobody’s perfect, manual memory management is hard (but not impossible), and sometimes things slip through. Tooling helps tremendously - static analyzers, smarter compilers, and better language features are great if you have access to them.

                                        I remember reading an interview somewhere in which Bjarne Stroustrup was asked where he felt he was on a scale of 1-10 as a C++ programmer. His response, iirc, was that he was a solid 7. This from the guy who wrote the language (granted, standardization has long since taken over). His answer was in reference to the language as a whole rather than memory management in particular, but I think it says an awful lot about both.

                                        1. 4

                                          My first job included a 6 month stint tracking down a single race condition in a distributed database. Taught me quite a bit about just how hard it is to get memory safety right.

                                          1. 2

                                            You’re probably the kind of person that might be open-minded to the idea that investing some upfront work into TLA+ might save time later. Might have saved you six months.

                                            1. 2

                                              The employer might not have needed me in 2007 if the original author had used TLA+ (in 1995, when they first built it).

                                              1. 2

                                                Yeah, that could’ve happened. That’s why I said you. As in, we’re better off if we learn the secret weapons ourselves, go to employers who don’t have them, and show off delivering better results. Then, leverage that to level up in career. Optionally, teach them how we did it. Depends on the employer and how they’ll react to it.

                                                1. 2

                                                  This particular codebase was ~600k lines of Delphi, ~100k of which was shared between 6 threads (each with its own purpose). 100% manually synchronized (or not), with no abstraction more powerful than mutexes and network sockets.

                                                  It took years to converge on ‘only crashes occasionally’, and has never been able to run on a hyperthreaded CPU.

                                                  1. 1

                                                    Although Delphi is nice, it has no tooling for this that I’m aware of. Finding the bugs might have to be a manual job. Whereas, Java has a whole sub-field dedicated to producing tools to detect this. They look for interleavings, often running things in different orders to see what happens.

                                                    “~100k of which was shared between 6 threads (each with their own purpose).”

                                                    That particularly is the kind of thing that might have gotten me attempting to write my own race detector or translator to save time. It wouldn’t surprise me if the next set of problems took similarly long to deal with.

                                      2. 1

                                        Oh yes. That’s how you become an expert.

                                        You quickly learn to stick to patterns, and not deviate one millimeter from those patterns. And then your software works.

                                        I vividly remember when I became disillusioned with shared_ptr: I put my faith into magic to solve my problems rather than understanding deeply what the program was doing. And our performance analysis showed that >10% of the runtime was being spent solely incrementing and decrementing shared pointers. That was 10% we’d never get back, in a game engine where performance can make or break the company.
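                                      For anyone who hasn’t profiled this themselves: the cost is the atomic reference count that every shared_ptr copy touches. An illustrative sketch (made-up names):

                                      ```cpp
                                      #include <memory>

                                      struct Mesh { int vertices = 0; };

                                      // A by-value shared_ptr parameter costs an atomic increment on entry
                                      // and a decrement on return; in a per-frame hot path that adds up.
                                      void draw_shared(std::shared_ptr<Mesh> m) { (void)m; }

                                      // Borrowing a plain reference touches no reference counts; the caller
                                      // keeps ownership, which is usually what a hot loop wants anyway.
                                      void draw_borrowed(const Mesh& m) { (void)m; }

                                      int main() {
                                          auto mesh = std::make_shared<Mesh>();
                                          draw_shared(mesh);    // atomic ++/-- per call
                                          draw_borrowed(*mesh); // no refcount traffic
                                      }
                                      ```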

                                        1. 2

                                          Ok, I take it you withheld shipping code into prod until you reached that level of expertise? I’m almost there after 20+ years and feel like a cheat now ;)

                                      3. 4

                                        It’s funny you mention slow compiles given your alternative is C++: the language that had the most people complaining about compile times before Rust.

                                      As far as the other comment goes, an alternative to C++ for a browser should be fairly stable by now. Rust and Ada are safer. D compiles faster for quicker iterations. All can turn off the safety features or overheads on a selective basis where needed. So, yeah, I’d consider starting a browser without C++.

                                      The other problem with C++ for security is that it’s really hard to analyze, with few tools compared to just C. There still isn’t even a formal semantics for it, because the language itself is ridiculously complicated. Unnecessarily so, given that more powerful languages, PreScheme and Scheme48, have had verified implementations. It’s just bad design as far as security is concerned.

                                    2. 6

                                      Comparing something as large as an OS to a project like Tarsnap seems awfully simplistic. C has a bad rap because of undisciplined developers, sure, but also because manual memory management can be hard. The more substantial the code base, the more difficult it can get.

                                      1. 6

                                        Tarsnap is written in C

                                      I want a rule for any conversation about C or C++: nobody defending what most people can do in those languages gets to cite an example from brilliant, security-focused folks such as Percival or DJB. Had I lacked empirical data, I’d not know whether it was just their brilliance or the language contributing to the results they get. Most programmers, even smart ones, won’t achieve what they achieved in terms of secure coding if given the same amount of time and similar incentives. Most won’t anyway, given that the incentives behind most commercial and FOSS software work against security.

                                      Long story short, what those people do doesn’t tell us anything about C/C++, because they’re the kind of people who might get results with assembly, Fortran, INTERCAL, or whatever. It’s a sampling bias that shows an upper bound rather than what to expect in general.

                                        1. 8

                                        Right. As soon as you restrict the domain to software written by teams, over a period of time, then it’s game over. Statistically you’re going to get a steady stream of CVEs, and you can do things to control the rate (like using sanitizers), but qualitatively there’s really nothing you can do about it.

                                        2. 4

                                        My frustration with C is that it makes lots of things difficult and dangerous that really don’t need to be. Ignoring Rust as a comparison, there are still lots of things that could be improved quite easily.

                                          1. 4

                                            That’s pretty much Zig. C with the low-hanging fruit picked.

                                            1. 0

                                              Now we’re talking! What kind of things could be improved easily?

                                              I like conversations like this because it highlights areas where C’s designers could have come up with something just a bit better.

                                              1. 9

                                                That’s easy. Add slices.
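                                              The whole idea fits in a few lines; a sketch in C-style C++ (a real proposal would want constness and a growable variant):

                                              ```cpp
                                              #include <cassert>
                                              #include <cstddef>

                                              // A slice is just a pointer paired with a length, so the bound
                                              // travels with the pointer and can be checked at the access site.
                                              struct IntSlice {
                                                  int* data;
                                                  std::size_t len;
                                              };

                                              int slice_get(IntSlice s, std::size_t i) {
                                                  assert(i < s.len);  // a real design might abort or return an error
                                                  return s.data[i];
                                              }

                                              int main() {
                                                  int buf[4] = {1, 2, 3, 4};
                                                  IntSlice s{buf, 4};
                                                  return slice_get(s, 2);  // fine; slice_get(s, 9) trips the assert
                                              }
                                              ```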

                                                1. 5

                                                  A significant amount of the undefined behavior in C and C++ is from integer operations. For example, int x = -1; int y = x << 1; is UB. (I bet if you did a poll of C and C++ programmers, a majority would say y == -2). There have been proposals (Regehr’s Friendly C, some more recent stuff in WG21) but so far they haven’t gotten much traction.
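                                              For reference, hedged ways to get the intended -2 without the UB (the unsigned-to-signed conversion below is implementation-defined before C++20, but does what you expect on mainstream compilers):

                                              ```cpp
                                              #include <cstdio>

                                              int main() {
                                                  int x = -1;
                                                  // int y = x << 1;                // UB pre-C++20: left shift of a negative value
                                                  int y = (int)((unsigned)x << 1);  // shift in unsigned; convert back
                                                  int z = x * 2;                    // fully defined; typically compiles to a shift anyway
                                                  std::printf("%d %d\n", y, z);     // prints "-2 -2" on two's-complement targets
                                              }
                                              ```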

                                                  1. 5

                                                    I tweeted this as a poll. As of the time I posted the answer, 42% said -2, 16% correctly said it was UB, another 16% said implementation defined, and 26% picked “different in C and C++.” Digging a little further, I’m happy to see this is fixed in the C++20 draft, which has it as -2.

                                                    1. 1

                                                      Agreed; int operations are one area I find hard to defend. The best I could come up with is that int64_t should have been the default datatype. This wouldn’t solve all the problems, but it would greatly reduce the surface.

                                                2. 4

                                                  I wonder about how well C maps to machine semantics. Consider some examples; for each, how does C expose the underlying machine’s abilities? How would we do this in portable C? I would humbly suggest that C simply doesn’t include these. Which CPU are you thinking of when you make your claim?

                                                  • x86 supports several extensions for SIMD logic, including SIMD registers. This grants speed; performance-focused applications have been including pages of dedicated x86 assembly and intrinsics for decades.
                                                  • amd64 supports “no-execute” permissions per-page. This is a powerful security feature that helps nullify C’s inherent weaknesses.
                                                • Modern ARM cores support the embedded “Thumb” ISA, which trades functionality for size improvements. This is an essential feature of ARM which has left fingerprints on video game consoles, phones, and other space-constrained devices.

                                                  Why is software slow? This is a sincere and deep question, and it’s not just about the choice of language. For example, we can write an unacceptably-slow algorithm in any (Turing-equivalent) language, so speed isn’t inherently about choice of language.

                                                  I remember how I learned to hate C; I wrote a GPU driver. When I see statements like yours, highly tribal, of forms like, “try writing [a native-code object with C linkage and libc interoperability] in a language other than C[++],” I wonder why you’ve given so much of your identity over to a programming language. I understand your pragmatic points about speed, legacy, interoperability, and yet I worry that you don’t understand our pragmatic point about memory safety.

                                                  1. 4

                                                  It was designed specifically for the advantages and limitations of the PDP-11, on top of the authors’ personal preferences. It’s been a long time since there was a PDP-11, so the abstract machine doesn’t map to current hardware. Here’s a presentation on its history that describes how many of the “design” decisions came to be. It borrowed a lot from Martin Richards’s BCPL, which wasn’t designed at all: it was just what would compile on even worse hardware.

                                                    1. 1

                                                      I keep hearing this trope, but coming from the world of EE, I’m readily convinced it is false. C never was designed to give full access to the hardware.

                                                      1. 1

                                                        The K&R book repeatedly describes it as using data types and low-level ops that reflect the computer capabilities of the time. Close to the machine. Then, it allows full control of memory with pointers. Then, it has an asm keyword to directly program in assembly language. It also was first used to write a program, UNIX, that had full access to and manipulated hardware.

                                                        So, it looks like that claim is false. It was designed to do two things:

                                                      1. Provide an abstract machine close to the hardware to increase productivity (over assembly), maintain efficiency, and keep the compiler easy to implement.

                                                        2. Where needed, provide full control over hardware with a mix of language and runtime features.

                                                        1. 1

                                                          Yet even the PDP-11 had a cache. C might have been low enough to pop in to assembly or write to an arbitrary memory position, but that does not mean it ever truly controlled the processor.

                                                          1. 1

                                                            That would be an exception to the model if C programmers routinely control the cache. It wouldn’t be if the cache was an accelerator that works transparently in the background following what program is doing with RAM. Which is what I thought it did.

                                                          Regardless, C allows assembly. If the instructions are available, it can control the cache with wrapped functions.

                                                    2. 1

                                                      In my experience, C is very close to how the processor works. Pretty much every C “phoneme” maps to one or two instructions. The assembly is a bit more expressive, especially when it comes to bit operations and weird floating-point stuff (and loads of weird, specialized stuff), but C can only use features it can expect any reasonable ISA to have. It usually is easily extensible to accommodate the more specific things, far more easily than most other languages.

                                                      About your three examples:

                                                      1. Adding proper support for SIMD is difficult, because it is very different between architectures or between versions of an architecture. The problem of designing around these differences (in a performant way, because if someone is vectorizing by hand, performance is important) is hard enough that I haven’t seen a good implementation. GCC has an extension that tries (see the sketch after this list), but it is a bit of a PITA to use (https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html#Vector-Extensions ). There are relatively easy-to-use machine-specific extensions out there that fit well into the language.

                                                      2. If you malloc anything, you’ll get memory in a non-executable page from any sane allocator. If you want memory with execute permissions, you’ll have to mmap() yourself.

                                                      3. Thumb doesn’t really change the semantics of the logical processor, it just changes the instruction encoding. This is almost completely irrelevant for C.
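                                                      Here’s the GCC extension from point 1 in miniature (GCC/Clang-specific, not standard C or C++):

                                                      ```cpp
                                                      // The compiler picks the SIMD instructions for the target, if any.
                                                      typedef int v4si __attribute__((vector_size(16)));  // four 32-bit lanes

                                                      v4si add4(v4si a, v4si b) {
                                                          return a + b;  // a single SIMD add where the hardware supports it
                                                      }

                                                      int main() {
                                                          v4si a = {1, 2, 3, 4};
                                                          v4si b = {10, 20, 30, 40};
                                                          v4si c = add4(a, b);
                                                          return c[0];  // subscripting vectors is part of the extension; 11 here
                                                      }
                                                      ```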

                                                      You can of course argue that most modern ISAs are oriented around C (I’m looking at you, byte addressability) and not the other way around, but that is a debate for another day.

                                                      1. 2

                                                        “Adding proper support for SIMD is difficult, because it is very different between architectures or between versions of an architecture.”

                                                        There have been parallel languages that can express that and more for years. C just isn’t up to the task. Chapel is my current favorite, given all the deployments it supports. IIRC, the Cilk language was a C-like one for data-parallel apps.

                                                        1. 1

                                                          Cilk is a C extension. Also, it is based upon multithreading, not SIMD.

                                                          1. 1

                                                            Oh yeah. Multithreading. Still an example of extending the language for parallelism. One I found last night for SIMD in C++ was here.

                                                    3. 3

                                                      I see your point regarding people sh*tting all over C/C++. These are clearly good languages and they definitely have their place. However, I work with C++ pretty frequently (not low-level OS stuff, strictly backend and application stuff on Windows) and I’ve encountered a couple of instances in which people way more capable than I am managed to shoot themselves in the foot. That changed my perspective.

                                                  To be clear, I also love C (mmmm… C++ less so), and I think that most developers would do well to at least pick up the language and be able to navigate (and possibly patch) a large C codebase. However, I’d also wager that an awful lot of the stuff written in C/C++ today probably doesn’t need the low-level access these languages provide. I think this is particularly true now that languages like Rust and Go are proving to be both very capable at the same kinds of problems and substantially less foot-gunny.

                                                    4. 2

                                                      This is a bit tangential, but your link for “Modern C++ Won’t Save Us” points to the incorrect page.

                                                      It should point to: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

                                                    5. 2

                                                      [edit: I was wrong about Windows; esak below pointed out that my gut feeling was incorrect, and Windows and macOS are actually right about on par. Carry on!]

                                                      I kind of like this article, but I have to ask a follow-up: despite its very wide attack surface, and despite being written in C or (occasionally) a very specific antiquated dialect of C++, Windows actually seems to have far fewer memory-safety-related issues than does macOS. What’s the cause? Or is that just how it looks from here as I’m doing a very quick trawl through the last year’s Windows bugs, but wouldn’t be accurate if I sat down and counted more carefully?

                                                      (NB, I’m narrowly focused on memory-related CVEs here, not doing a count of security issues.)

                                                      1. 8

                                                        Hrm. The article about Microsoft exploring the use of Rust definitely made some waves the other day due to the author mentioning the substantial number of critical defects in Windows that can be attributed to memory issues. The article is based on a presentation by MS Security Response here which makes it pretty clear (slide 10) that memory safety related issues remain a significant source of concern within the Windows environment. I don’t know how this compares to OSX, but it seems clear to me that regardless of the OS, if an unsafe language is involved, then memory management is an issue.

                                                        (edit: Windows critical vulnerabilities related to memory issues to-date: 70% (see slide 10 in link.) Seems more or less on par with OSX.)

                                                        1. 2

                                                          Yeah, you’re right. So it was just my gut, which was wrong. I’m going to edit the GP just to acknowledge that you’re correct and why I was wrong. Thanks for taking a more detailed look.

                                                        2. 7

                                                    “Windows actually seems to have far fewer memory-safety-related issues than does macOS. What’s the cause?”

                                                    It happened when Microsoft introduced the SDL, implemented by Steve Lipner, who worked in high-assurance security. He modeled it after the lifecycle approach used in the TCSEC, which they had applied in building a secure VMM (pdf). In a blog write-up, he said he made it lighter, with the main goal of letting teams still ship features quickly, to increase adoption by Microsoft teams who would otherwise prioritize shipping over security. He got results. Eventually, hackers started targeting 3rd-party applications more than the Windows kernel, since it had too little low-hanging fruit for them.

                                                          On the side, Microsoft Research was creating tools to automatically find problems. Microsoft didn’t use most of them. However, the SLAM toolkit for verifying drivers probably single-handedly knocked out almost all the blue screens. So, there’s that on the driver side.

                                                        3. 0

                                                    These kinds of critiques are truly pathetic and make all the classic errors of CS arguments. First of all, operating systems programming contains many operations that are not suitable for the “memory safety” techniques used in languages like Swift or similar (PHP, for example). When you are programming a DMA I/O device, copying data from kernel to user space or vice versa, parsing the bytes of an IP packet, executing a system call, or changing the memory map, you are well outside the “safe space”. Even if you are using Lua to interface to a device from user space (and people have done very interesting work with that), memory safety has to depend on careful programming, not some magical enforcement mechanism.

                                                    Then note that nobody has made an effort to figure out how many of these memory faults happen in parts of the OS that would need to run in the “unsafe” mode of one of the super-brilliant better languages. That would actually be interesting, but would involve a lot of work. Then there is no attention to what language would be used to produce the runtime of your, e.g., garbage-collected alternative, or how it would deal with time-critical operations (lock semaphore; start to run critical code; oops, trigger garbage collector and hope that it doesn’t need the same lock). Then there is absolutely no effort to determine how many of these terrible errors would be avoided if the developers used static and dynamic analysis tools (like any responsible C developer would in this century), how many are caused by the reckless UB nonsense in GCC/Clang, how many are in 20-year-old code that was developed before we had static analysis tools, or …

                                                          At least the Zig designers are thinking about what they are doing rather than sneering about things they don’t understand.

                                                          1. 1

                                                            I don’t think you actually looked for the things you named off. For example, typing type-safe DMA into DDG gave me this. Microsoft and FLINT also had typed/type-safe assembly.

                                                            EDIT: “Then there is absolutely no effort to determine how many of these terrible errors would be avoided if the developers used static and dynamic analysis tools”

                                                      I agree with this. I’d like to see comparisons include that along with productivity. Fighting the borrow checker vs. running analyzers/testers for temporal errors, as an example.