1. 10

    The traditional difference between RISC and CISC is memory operands for every instruction instead of just loads and stores (as you will notice, x86 doesn’t have a dedicated load instruction; it uses the move-register instruction with a memory operand for that). Another defining feature of CISC architectures is the extensive use of microcode for complicated instructions, such as the x86 sine and cosine instructions (FJCVTZS is not a complicated instruction; what it does is quite simple, though it does look like a cat walked over the keyboard). Use of instructions encoded in microcode is strongly discouraged, because it is usually slower than the code written by hand. I don’t see how the rest of the article (cache access times etc.) is in any way relevant to the discussion of how well RISC scales.

    1. 2

      What’s the point of having complex instructions (like sin, cos) encoded in microcode, if it’s slower?

      I always thought the only reason for many of the heavyweight x86 instructions was that they were faster than the fastest hand-optimized assembler code doing the same thing. It certainly is true of the AES instruction set.

      1. 4

        What’s the point of having complex instructions (like sin, cos) encoded in microcode, if it’s slower?

        Because it was once faster.

        1. 3

          The only one of the AES instructions encoded in microcode is AESKEYGENASSIST; all the other ones aren’t (they split into 2 uops at most) on Broadwell processors. On AMD Zen, all of them are in hardware.

          1. 1

            What’s the point of having complex instructions (like sin, cos) encoded in microcode, if it’s slower?

            Otherwise they need to be in hardware, taking up silicon.

            1. 1

              Sorry, I meant: why have these additional instructions at all, if they’re slower?

              1. 3

                Otherwise they need to be in hardware, taking up silicon.

                Because they are part of the ISA, and it needs to remain backwards compatible. They didn’t add sine and cosine instructions to SSE and AVX, though.

                1. 1

                  It’s easy and safe to break backwards compatibility for x86 extensions, because every program is supposed to check whether the CPU it’s running on supports an instruction before executing it. The CPUID instruction tells you which ISA extensions are supported.

                  For example, if you run CPUID, it sets the 30th bit of ECX to indicate RDRAND support and the 0th bit of EDX to show x87 support. The floating-point trigonometric functions are part of x87. Intel and AMD could easily drop support for x87 by setting the CPUID flag for x87 to 0. The only programs that would break would be badly-behaving programs that don’t check before using x86 extension instructions.
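                  As a sketch of what that check looks like in practice (the register values below are hard-coded stand-ins; on real hardware you’d fill them via the CPUID instruction, e.g. with __get_cpuid from <cpuid.h> on GCC/Clang x86 targets):

                  ```cpp
                  #include <cstdint>
                  #include <iostream>

                  // Extract a single feature bit from a CPUID result register.
                  bool has_feature(uint32_t reg, int bit) {
                      return (reg >> bit) & 1u;
                  }

                  int main() {
                      // Hypothetical values as CPUID leaf 1 might return them:
                      uint32_t ecx = 1u << 30; // bit 30 of ECX: RDRAND
                      uint32_t edx = 1u << 0;  // bit 0 of EDX: x87 FPU
                      std::cout << "RDRAND: " << has_feature(ecx, 30) << "\n";
                      std::cout << "x87: " << has_feature(edx, 0) << "\n";
                  }
                  ```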

                  1. 7

                    The only programs that would break would be badly-behaving programs

                    So, pretty much all of them? Programs don’t check for features that 99% of their userbase has. For example, on macOS you unconditionally use everything up to SSSE3, because there are no usable Macs without it.

                    When programs stop working, or become much slower due to traps and emulation, users aren’t any happier knowing whose fault it was.

                    1. 1

                      Are you a LLVM developer?

                      1. 1

                        No.

            2. 2

              Use of instructions encoded in microcode is strongly discouraged, because it is usually slower than the code written by hand.

              I don’t think this is correct. Microcode is a very limited resource on the CPU, so why would anyone waste space encoding instructions that could be implemented elsewhere if they were slower?

              1. 4

                Because the instruction set has to be backwards compatible. They decided that floating-point sine and cosine were a great idea when introducing the x87, but they soon found out they weren’t. There’s a suspicious lack of these instructions in SSE and AVX. Still, these instructions waste a lot of space in the microcode ROM.

            1. 3

              It’s unfortunate that most of these posts clump C with C++. Yes, it does reference Modern C++ Won’t Save Us. The question I would love answered is, does modern C++ solve 80% of the problems? Because 80% is probably good enough IMO if solving for 99% distracts us from other important problems.

              1. 11

                The question I would love answered is, does modern C++ solve 80% of the problems?

                The reason this isn’t really answered is because the answer is a very unsatisfying, “yes, kinda, sometimes, with caveats.”

                The issues highlighted in Modern C++ Won’t Save Us are not straw men; they are real. The std::span issue is one I’ve actually hit, and the issues highlighted with std::optional are likewise very believable. They can and will bite you.

                On the other hand, there is nothing keeping you from defining a drop-in replacement for e.g. std::optional that simply doesn’t define operator* and operator->, which would suddenly and magically not be prone to those issues. As Modern C++ Won’t Save Us itself notes, Mozilla has done something along these lines with std::span, too, protecting it from the use-after-free issue that the official standard allows. These structures behave the way they do because they’re trying to be drop-in replacements for bare pointers in the 90% case, but they’re doing it at the cost of safety. If you’re doing a greenfield C++ project, you can instead opt for safe variants that aren’t drop-in replacements, but that avoid use-after-free, buffer overruns, and the like. But those are, again, not the versions specified to live in std::.
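                To make that concrete, here’s a minimal sketch of such a non-drop-in variant (the name safe_optional is made up for illustration; it just wraps std::optional and omits the unchecked access paths):

                ```cpp
                #include <iostream>
                #include <optional>
                #include <utility>

                // A hypothetical "safe_optional": same storage as std::optional,
                // but it deliberately omits operator* and operator->, so the only
                // access paths are checked ones.
                template <typename T>
                class safe_optional {
                    std::optional<T> inner_;
                public:
                    safe_optional() = default;
                    safe_optional(T value) : inner_(std::move(value)) {}

                    bool has_value() const { return inner_.has_value(); }

                    // Checked access: throws std::bad_optional_access instead of UB.
                    const T& value() const { return inner_.value(); }

                    // Safe default path, mirroring std::optional::value_or.
                    T value_or(T fallback) const {
                        return inner_.value_or(std::move(fallback));
                    }
                };

                int main() {
                    safe_optional<int> present(42), absent;
                    std::cout << present.value() << "\n";
                    std::cout << absent.value_or(-1) << "\n";
                    // absent.value() would throw, never dereference garbage.
                }
                ```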

                And that’s why the answer is so unsatisfying: std::move, rvalue references, unique_ptr, and so on give you the foundation for C++ to be…well, certainly not Rust, but a lot closer to Rust than to C. But the standard library, due to a mixture of politics and a strong desire for backwards compatibility with existing codebases, tends to opt for ergonomics over security.

                1. -1

                  I think you hit the nail on the head, C++ is ergonomic. I guess I don’t like the idea that Rust would get in the way of me expressing my ideas (even if they are bad). Something about that is offensive to me. But of course, that isn’t a rational argument.

                  Golang, on one hand, is like speaking like a 3-year-old, and Rust is speaking the language of 1984. C++, on the other hand, is kind of poetic. I think that people forget software can be art and self-expression, just as much as it can be functional.

                  1. 10

                    I guess I don’t like the idea that Rust would get in the way of me expressing my ideas (even if they are bad). Something about that is offensive to me.

                    Isn’t it more offensive to tell users that you are putting them at greater risk of security vulnerabilities because you don’t like to be prevented from expressing yourself?

                    1. 3

                      That’s an original take on it.

                    2. 6

                      It doesn’t get in the way of expressing your ideas. It gets in the way of you expressing them in a way where it can’t prove they’re safe. A way where the ideas might not actually work in production. That’s a distinction I think is worthwhile.

                      1. 5

                        I think we agree strongly that Rust constrains what you can say. Where we have different tastes is that I like that. To me, it’s the kind of constraint that sparks artistic creativity, and by reasoning about my system so that I can essentially prove that its memory access patterns are safe, I think I get a better result.

                        But I understand how a different programmer, or in different circumstances, would value the freedom to write pretty much any code they like.

                        1. 3

                          I don’t like the idea that Rust would get in the way of me expressing my ideas

                          Every language allows you to express yourself in a different way; a Javascript programmer might say the same of C++. There is poetry in the breadth of concepts expressible (and inexpressible!) in every language.

                          I started out with Rust by adding .clone() to everything that made it complain about borrowing, artfully(?) skirting around the parts that seem to annoy everyone else until I was ready. Sure, it might have made it run a bit slower, but I knew my first few (er, several) programs would be trash anyway while I got to grips with the language. I recommend it if you’re curious but reticent about trying it out.

                          – The Rust Evangelism Strike Force

                          1. 3

                            That is true; you have to do things the “Rust way” rather than your way. People do react with offense to “no, you can’t just modify that!”

                            However, I found Rust gave me a vocabulary and building blocks for common patterns, which in C I’d “freestyle” instead. Overall this seems more robust and readable, because other Rust users instantly recognize what I’m trying to do, instead of second-guessing ownership of pointers, thread-safety, and meaning of magic booleans I’d use to fudge edge cases.

                        2. 5

                          Tarsnap is written in C. I think it’s ultra unfortunate that C has gotten a bad rap due to the undisciplined people who use it.

                          C and C++ are tools to create abstractions. They leave many ways to burn yourself. But they also represent closely how machines actually work. (This is more true of C than C++, but C is a subset of C++, so the power is still there.)

                          This is an important quality often lost in “better” programming languages. It’s why most software is so slow, even when we have more computing power than our ancestors could ever dream of.

                          I fucking love C and C++, and I’m saddened to see it become a target for hatred. People have even started saying that if you actively choose C or C++, you are an irresponsible programmer. Try writing a native node module in a language other than C++ and see how far you get.

                          1. 25

                            Tarsnap is written in C. I think it’s ultra unfortunate that C has gotten a bad rap due to the undisciplined people who use it.

                            I think the hate for C and C++ is misplaced; I agree. But I also really dislike phrasing the issue the way you have, because it strongly implies that bugs in C code are purely due to undisciplined programmers.

                            The thing is, C hasn’t gotten a bad rap because undisciplined people use it. It’s gotten a bad rap because disciplined people who use it still fuck up—a lot!

                            Is it possible to write safe C? Sure! The techniques involved are a bit arcane, and probably not applicable to general programming, but sure. For example, dsvpn never calls malloc. That’s definitely a lot safer than normal C.

                            But that’s not the default, and not doing it that way doesn’t make you undisciplined. A normal C program is gonna have to call malloc or mmap at some point. A normal C program is gonna have to pass around pointers with at least some generic/needs-casting members at some point. And as soon as you get into those areas, C, both the language and the ecosystem, make you one misplaced thought away from a vulnerability.

                            This is an important quality often lost in “better” programming languages. It’s why most software is so slow, even when we have more computing power than our ancestors could ever dream of.

                            You’re flirting around a legitimate issue here, which is that some languages that are safer (e.g. Python, Java, Go) are arguably intrinsically slower because they have garbage collection/force a higher level of abstraction away from the hardware. But languages like Zig, Rust, and (to be honest) bygones like Turbo Pascal and Ada prove that you don’t need to be slower to be safer, either in compilation or runtime. You need stricter guarantees than C offers, but you don’t need to slow down the developer in any other capacity.

                            No, people shouldn’t hate on C and C++. But I also don’t think they’re wrong to try very hard to avoid C and C++ if they can. I think you are correct that a problem until comparatively recently has been that giving up C and C++, in practice, meant going to C#, Java, or something else that was much higher on the abstraction scale than you needed if your goal were merely to be a safer C. But I also think that there are enough new technologies either already here or around the corner that it’s worth admitting where C genuinely is weak, and looking to those technologies for help.

                            1. 2

                              You need stricter guarantees than C offers, but you don’t need to slow down the developer in any other capacity.

                              Proven in a couple of studies with this one (pdf) being the best. I’d love to see a new one using Rust or D.

                              1. 1

                                I also really dislike phrasing the issue the way you have, because it strongly implies that bugs in C code are purely due to undisciplined programmers.

                                You dislike the truth, then. If you don’t know how to free memory when you’re done with it and then not touch that freed memory, you should not be shipping C++ to production.

                                You namedrop Rust. Note that you can’t borrow subsets of arrays. Will you admit that safety comes at a cost? That bounds checking is a cost, and you won’t ever achieve the performance you otherwise could have, if you have these checks?

                                Note that Rust’s compiler is so slow that it’s become a trope. Any mention of this will piss off the Rust Task Force into coming out of the woodwork with how they’ve been doing work on their compiler and “Just wait, you’ll see!” Yet it’s slow. And if you’re disciplined with your C++, then instead of spending a year learning Rust, you may as well just write your program in C++. It worked for Bitcoin.

                                It worked for Tarsnap, Emacs, Chrome, Windows, and a litany of software programs that have come before us.

                                I also call to your attention the fact that real-world hacks rarely occur thanks to corrupted memory. The most common vector (by far!) to breach your corporation is via spearphishing your email. If you analyze the number of times that a memory corruption actually matters and actually causes real-world disasters, you’ll be forced to conclude that a crash just isn’t that significant.

                                Most people shy away from these ideas because it offends them. It offended you, by me saying “Most programmers suck.” But you know what? It’s true.

                                I’ll leave off with an essay on the benefits of fast software.

                                1. 12

                                  It worked for […] Chrome

                                  It’s difficult for me to reconcile this with the perspective of a senior member of the Chrome security team: https://twitter.com/fugueish/status/1154447963051028481

                                  Chrome, Chrome OS, Linux, Android — same problem, same scale.

                                  Here’s some of the Fish in a Barrel analysis of Chrome security advisories:

                                  1. 1

                                    This implies an alternative could have been used successfully.

                                    Even today, would anyone dare write a browser in anything but C++? Even Rust is a gamble, because it implies you can recruit a team sufficiently proficient in Rust.

                                    Admittedly, Rust is a solid alternative now. But most companies won’t make the switch for a long time.

                                    Fun exercise: Criticize Swift for being written in C++. Also V8.

                                    C++ is still the de facto for interoperability, too. If you want to write a library most software can use, you write it in C or C++.

                                    1. 13

                                      C++ is still the de facto for interoperability, too.

                                      C is the de facto for interoperability. C++ is about as bad as Rust, and for the same reason: you can’t use generic types without compiling a specialized version of the templated code.
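                                      A minimal illustration of the distinction (function names here are made up for the example): an extern "C" function exports one stable, unmangled symbol that any language’s FFI can call, while a template has no symbol at all until it’s instantiated at a concrete type in some translation unit:

                                      ```cpp
                                      #include <iostream>

                                      // Exported with C linkage as the plain symbol "add_ints";
                                      // callable from any language with a C FFI.
                                      extern "C" int add_ints(int a, int b) {
                                          return a + b;
                                      }

                                      // No code or symbol exists for this until it is
                                      // instantiated at a concrete type.
                                      template <typename T>
                                      T add_generic(T a, T b) { return a + b; }

                                      int main() {
                                          std::cout << add_ints(2, 3) << "\n";
                                          std::cout << add_generic<int>(4, 5) << "\n"; // instantiated here
                                      }
                                      ```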

                                      1. 8

                                        You’re shifting the goalposts here. “No practical alternative to C++” is altogether unrelated to “C and C++ are perfectly safe in disciplined programmers’ hands” which you claimed above.

                                        And no, empirically they are not safe, a few outlier examples notwithstanding (and other examples like Chrome and Windows don’t really support your claim). It’s also illogical to suggest that just because there are a handful of developers in the world who managed to avoid all the safety issues in their code (maybe), C and C++ are perfectly fine for wide industry use by all programmers, who, in your own view, aren’t disciplined enough. Can’t you see that it doesn’t follow? I can never understand why people keep making this claim.

                                        1. -1

                                          Also known as “having a conversation.”

                                          But, sure, let’s return to the original claim:

                                          C and C++ are perfectly safe in disciplined programmers’ hands

                                          Yes, I claim this with no hubris, as someone who has been writing C++ off and on for well over a decade.

                                          I’m prepared to defend that claim with my upcoming project, SweetieKit (NodeJS for iOS). I think overall it’s quite safe, and that if you manage to crash while using it, it’s because you’ve used the Objective-C API in a way that would normally crash. For example, pick apart the ARKit bindings: https://github.com/sweetiebird/sweetiekit/tree/cb881345644c2f1b2ac1a51032ec386d1ddb7ced/node-ios-hello/NARKit

                                          I don’t think SweetieKit could have been made in any other language, partly because binding to V8 is difficult from any other language.

                                          I do not claim at the present time that there are no bugs in SweetieKit (nor will I ever). But I do claim that I know where most of them probably are, and that there are few unexpected behaviors.

                                          Experience matters. Discipline matters. Following patterns matters. Complaining that C++ is inherently unsafe is like claiming that MMA fighters will inherently lose: the claim makes no sense, first of all, and it’s not true. You follow patterns while fighting. And you follow patterns while programming. Technique matters!

                                          1. 5

                                            Perhaps you are one of the few sufficiently disciplined programmers! But I really can’t agree with your last paragraph when, for example, Microsoft says this:

                                            the root cause of approximately 70% of security vulnerabilities that Microsoft fixes and assigns a CVE (Common Vulnerabilities and Exposures) are due to memory safety issues. This is despite mitigations including intense code review, training, static analysis, and more.

                                            I think you have a point about the impact of these types of issues compared to social engineering and other attack vectors, but I’m not quite sure that it’s sufficient justification if there are practical alternatives which mostly remove this class of vulnerabilities.

                                            1. 1

                                              For what it’s worth, I agree with you.

                                              But only because programmers in large groups can’t be trusted to write C++ safely in an environment where safety matters. The game industry is still mostly C++ powered.

                                              1. 3

                                                I agree with that. I’ll add that games:

                                                (a) Have lots of software problems that even the players find and use in-game.

                                                (b) Sometimes have memory-related exploits that have been used to attack the platforms.

                                                (c) Dodge lots of issues that languages such as Rust address, by using memory pools for a lot of stuff. I’ll also add that memory pools are supported by Ada, too.

                                                Preventable, buggy behavior in games developed by big companies continues to annoy me. That’s a sampling bias that works in my favor. If what I’ve read is correct, they’re better and harder-working programmers than average in C++ but still have these problems alternatives are immune to.

                                                1. 3

                                                  That, and games encourage an environment of ignoring security or reliability in favour of getting the product out the door, and then no long-term maintenance. If it weren’t for the consoles, they wouldn’t even have security on their radar.

                                                  1. 1

                                                    Yeah. On top of it, the consoles showed how quickly the private market could’ve cranked out hardware-level security for our PC’s and servers… if they cared. Also, what the lower end of the per-unit price might be.

                                        2. 6

                                          Would anyone dare write a browser in anything but C++?

                                          That’s preeeetty much the whole reason Mozilla made Rust. It now powers a decent chunk of Firefox, esp the performance-sensitive parts like, say, the rendering engine.

                                      2. 5

                                        If you don’t know how to free memory when you’re done with it and then not touch that freed memory, you should not be shipping C++ to production.

                                  Did you ever botch up memory management? If you say “no”, I am going to assume you haven’t ever used C or C++.

                                        1. 5

                                          There’s a difference between learning and shipping to production.

                                          Personally, I definitely can’t get memory management right by myself and I’m pretty suspicious of people who claim they can, but people can and do write C++ that only has a few bugs.

                                          1. 5

                                      There’s always one or another edge case that makes it slip into prod, even with the experts. A toolchain on a legacy project that lacks the sanitizer flags you are used to. An integrated third-party library with an ambiguous lifecycle description. A tired dev at the end of a long stint. Etc., etc.

                                      Tooling helps, but any time an allocation bug is caught by that safety net, it means you screwed up somewhere.

                                            1. 5

                                              Amen. I thought I was pretty good a while back having maintained a desktop app for years (C++/MFC (yeah, I know)). Then I got on a team that handles a large enterprise product - not exclusively C++ but quite a bit. There are a couple of C++ guys on the team that absolutely run circles around me. Extremely good. Probably 10x good. It was (and has been) a learning experience. However, every once in a while we will still encounter a memory issue. It turns out that nobody’s perfect, manual memory management is hard (but not impossible), and sometimes things slip through. Tooling helps tremendously - static analyzers, smarter compilers, and better language features are great if you have access to them.

                                        I remember reading an interview somewhere in which Bjarne Stroustrup was asked where he felt he was on a scale of 1–10 as a C++ programmer. His response, iirc, was that he was a solid 7. This from the guy who wrote the language (granted, standardization has long since taken over). His answer was in reference to the language as a whole rather than memory management in particular, but I think it says an awful lot about both.

                                              1. 4

                                                My first job included a 6 month stint tracking down a single race condition in a distributed database. Taught me quite a bit about just how hard it is to get memory safety right.

                                                1. 2

                                                  You’re probably the kind of person that might be open-minded to the idea that investing some upfront work into TLA+ might save time later. Might have saved you six months.

                                                  1. 2

                                                    The employer might not have needed me in 2007 if the original author had used TLA+ (in 1995, when they first built it).

                                                    1. 2

                                                      Yeah, that could’ve happened. That’s why I said you. As in, we’re better off if we learn the secret weapons ourselves, go to employers who don’t have them, and show off delivering better results. Then, leverage that to level up in career. Optionally, teach them how we did it. Depends on the employer and how they’ll react to it.

                                                      1. 2

                                                        This particular codebase was ~600k lines of delphi, ~100k of which was shared between 6 threads (each with their own purpose). 100% manually synchronized (or not) with no abstraction more powerful than mutexes and network sockets.

                                                        It took years to converge on ‘only crashes occasionally’, and has never been able to run on a hyperthreaded CPU.

                                                        1. 1

                                                          Although Delphi is nice, it has no tooling for this that I’m aware of. Finding the bugs might have to be a manual job. Whereas, Java has a whole sub-field dedicated to producing tools to detect this. They look for interleavings, often running things in different orders to see what happens.

                                                          “~100k of which was shared between 6 threads (each with their own purpose).”

                                                          That particularly is the kind of thing that might have gotten me attempting to write my own race detector or translator to save time. It wouldn’t surprise me if the next set of problems took similarly long to deal with.

                                            2. 1

                                              Oh yes. That’s how you become an expert.

                                              You quickly learn to stick to patterns, and not deviate one millimeter from those patterns. And then your software works.

                                              I vividly remember when I became disillusioned with shared_ptr: I put my faith into magic to solve my problems rather than understanding deeply what the program was doing. And our performance analysis showed that >10% of the runtime was being spent solely incrementing and decrementing shared pointers. That was 10% we’d never get back, in a game engine where performance can make or break the company.
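                                              For what it’s worth, that overhead comes from the atomic reference-count updates on every copy, and the usual mitigation is to keep copies off the hot path. A sketch (hypothetical types, not the engine in question):

                                              ```cpp
                                              #include <iostream>
                                              #include <memory>

                                              struct Entity { int hp = 100; };

                                              // Copies the shared_ptr: one atomic increment on entry,
                                              // one atomic decrement on exit.
                                              int hp_by_value(std::shared_ptr<Entity> e) { return e->hp; }

                                              // Borrows the shared_ptr: no refcount traffic at all.
                                              int hp_by_ref(const std::shared_ptr<Entity>& e) { return e->hp; }

                                              int main() {
                                                  auto e = std::make_shared<Entity>();
                                                  std::cout << hp_by_value(e) << "\n";
                                                  std::cout << hp_by_ref(e) << "\n";
                                                  std::cout << e.use_count() << "\n"; // no lingering copies
                                              }
                                              ```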

                                              1. 2

                                                Ok, I take it you withheld shipping code into prod until you reached that level of expertise? I’m almost there after 20+ years and feel like a cheat now ;)

                                            3. 4

                                              It’s funny you mention slow compiles given your alternative is C++: the language that had the most people complaining about compile times before Rust.

                                        As far as the other comment goes, the C++ alternative for a browser should be fairly stable. Rust and Ada are safer. D compiles faster for quicker iterations. All can turn off the safety features or overheads on a selective basis where needed. So, yeah, I’d consider starting a browser without C++.

                                        The other problem with C++ for security is that it’s really hard to analyze, with few tools compared to just C. There still isn’t even a formal semantics for it, because the language itself is ridiculously complicated. Unnecessarily so, given that more powerful languages, PreScheme and Scheme48, had verified implementations. It’s just bad design as far as security is concerned.

                                          2. 6

                                            Comparing something as large as an OS to a project like Tarsnap seems awfully simplistic. C has a bad rap because of undisciplined developers, sure, but also because manual memory management can be hard. The more substantial the code base, the more difficult it can get.

                                            1. 6

                                              Tarsnap is written in C

                                              I want a rule in any conversation about C or C++ that nobody defending what can be done in those languages by most people uses an example from brilliant, security-focused folks such as Percival or DJB. Had I lacked empirical data, I’d not know whether it was just their brilliance or the language contributing to the results they get. Most programmers, even smart ones, won’t achieve what they achieved in terms of secure coding if given the same amount of time and similar incentives. Most won’t anyway given the incentives behind most commercial and FOSS software that work against security.

                                  Long story short, what those people do doesn’t tell us anything about C/C++, because they’re the kind of people who might get results with assembly, Fortran, INTERCAL, or whatever. It’s a sampling bias that shows an upper bound rather than what to expect in general.

                                              1. 8

                                    Right. As soon as you restrict the domain to software written by teams, over a period of time, then it’s game over. Statistically you’re going to get a steady stream of CVEs, and you can do things to control the rate (like using sanitizers), but qualitatively there’s really nothing you can do about it.

                                              2. 4

                                    My frustration with C is that it makes lots of things difficult and dangerous that really don’t need to be. Ignoring Rust as a comparison, there are still lots of things that could be improved quite easily.

                                                1. 4

                                                  That’s pretty much Zig. C with the low-hanging fruit picked.

                                                  1. 0

                                                    Now we’re talking! What kind of things could be improved easily?

                                                    I like conversations like this because it highlights areas where C’s designers could have come up with something just a bit better.

                                                    1. 9

                                                      That’s easy. Add slices.

                                                      1. 5

                                                        A significant amount of the undefined behavior in C and C++ is from integer operations. For example, int x = -1; int y = x << 1; is UB. (I bet if you did a poll of C and C++ programmers, a majority would say y == -2). There have been proposals (Regehr’s Friendly C, some more recent stuff in WG21) but so far they haven’t gotten much traction.

                                                        1. 5

                                                          I tweeted this as a poll. As of the time I posted the answer, 42% said -2, 16% correctly said it was UB, another 16% said implementation defined, and 26% picked “different in C and C++.” Digging a little further, I’m happy to see this is fixed in the C++20 draft, which has it as -2.

                                                          1. 1

                                                            Agreed; int operations are one area I find hard to defend. The best I could come up with is that int64_t should have been the default datatype. This wouldn’t solve all the problems, but it would greatly reduce the surface.

                                                      2. 4

                                                        I wonder about how well C maps to machine semantics. Consider some examples; for each, how does C expose the underlying machine’s abilities? How would we do this in portable C? I would humbly suggest that C simply doesn’t include these. Which CPU are you thinking of when you make your claim?

                                                        • x86 supports several extensions for SIMD logic, including SIMD registers. This grants speed; performance-focused applications have been including pages of dedicated x86 assembly and intrinsics for decades.
                                                        • amd64 supports “no-execute” permissions per-page. This is a powerful security feature that helps nullify C’s inherent weaknesses.
                                                        • Modern ARM cores support embedded “Thumb” ISAs which trade functionality for size improvements. This is an essential feature of ARM which has left fingerprints on video game consoles, phones, and other space-constrained devices.

                                                        Why is software slow? This is a sincere and deep question, and it’s not just about the choice of language. For example, we can write an unacceptably-slow algorithm in any (Turing-equivalent) language, so speed isn’t inherently about choice of language.

                                                        I remember how I learned to hate C; I wrote a GPU driver. When I see statements like yours, highly tribal, of forms like, “try writing [a native-code object with C linkage and libc interoperability] in a language other than C[++],” I wonder why you’ve given so much of your identity over to a programming language. I understand your pragmatic points about speed, legacy, interoperability, and yet I worry that you don’t understand our pragmatic point about memory safety.

                                                        1. 4

                                                          It was designed specifically around the advantages and limitations of the PDP-11, on top of the authors’ personal preferences. It’s been a long time since there was a PDP-11, so the abstract machine doesn’t map to current hardware. Here’s a presentation on its history that describes how many of the “design” decisions came to be. It borrowed a lot from Martin Richards’s BCPL, which wasn’t designed at all: it was just what compiled on even worse hardware.

                                                          1. 1

                                                            I keep hearing this trope, but coming from the world of EE, I’m readily convinced it is false. C never was designed to give full access to the hardware.

                                                            1. 1

                                                              The K&R book repeatedly describes it as using data types and low-level ops that reflect the computer capabilities of the time. Close to the machine. Then, it allows full control of memory with pointers. Then, it has an asm keyword to directly program in assembly language. It also was first used to write a program, UNIX, that had full access to and manipulated hardware.

                                                              So, it looks like that claim is false. It was designed to do two things:

                                                              1. Provide an abstract machine close to hardware to increase (over assembly) productivity, maintain efficiency, and keep compiler easy to implement.

                                                              2. Where needed, provide full control over hardware with a mix of language and runtime features.

                                                              1. 1

                                                                Yet even the PDP-11 had a cache. C might have been low-level enough to drop into assembly or write to an arbitrary memory position, but that does not mean it ever truly controlled the processor.

                                                                1. 1

                                                                  That would be an exception to the model if C programmers routinely controlled the cache. It wouldn’t be if the cache were an accelerator that works transparently in the background, following what the program is doing with RAM. Which is what I thought it did.

                                                                  Regardless, C allows assembly. If the instructions are available, it can control the cache with wrapped functions.

                                                          2. 1

                                                            In my experience, C is very close to how the processor works. Pretty much every C “phoneme” maps to one or two instructions. Assembly is a bit more expressive, especially when it comes to bit operations and weird floating-point stuff (and loads of weird, specialized things), but C can only use features it can expect any reasonable ISA to have. It is usually easy to extend to accommodate the more specific things, far more easily than most other languages.

                                                            About your three examples:

                                                            1. Adding proper support for SIMD is difficult, because it is very different between architectures, or between versions of an architecture. The problem of designing (in a performant way, because if someone is vectorizing by hand, performance is important) around these differences is hard enough that I haven’t seen a good implementation. GCC has an extension that tries, but it is a bit of a PITA to use (https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html#Vector-Extensions). There are relatively easy-to-use machine-specific extensions out there that fit well into the language.

                                                            2. If you malloc anything, you’ll get memory in a non-executable page from any sane allocator. If you want memory with execute permissions, you’ll have to mmap() yourself.

                                                            3. Thumb doesn’t really change the semantics of the logical processor, it just changes the instruction encoding. This is almost completely irrelevant for C.

                                                            You can of course argue that most modern ISAs are oriented around C (I’m looking at you, byte addressability) and not the other way around, but that is a debate for another day.

                                                            1. 2

                                                              “Adding proper support for SIMD is difficult, because it is very different between architectures or between versions of an architecture.”

                                                              There have been parallel languages that can express that and more for years. C just isn’t up to the task. Chapel is my current favorite, given all the deployments it supports. IIRC, the Cilk language was a C-like one for data-parallel apps.

                                                              1. 1

                                                                Cilk is a C extension. Also, it is based upon multithreading, not SIMD.

                                                                1. 1

                                                                  Oh yeah. Multithreading. Still an example of extending the language for parallelism. One I found last night for SIMD in C++ was here.

                                                          3. 3

                                                            I see your point regarding people sh*tting all over C/C++. These are clearly good languages and they definitely have their place. However, I work with C++ pretty frequently (not low-level OS stuff, strictly backend and application stuff on Windows) and I’ve encountered a couple of instances in which people way more capable than I am managed to shoot themselves in the foot. That changed my perspective.

                                                            To be clear, I also love C (mmmm…less C++), and I think that most developers would do well to at least pick up the language and be able to navigate (and possibly patch) a large C codebase. However, I’d also wager that an awful lot of the stuff written in C/C++ today probably doesn’t need the low-level access that these languages provide. I think this is particularly true now that languages like Rust and Go are proving to be both very capable at the same kinds of problems and also substantially less foot-gunny.

                                                          4. 2

                                                            This is a bit tangential, but your link for “Modern C++ Won’t Save Us” points to the incorrect page.

                                                            It should point to: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

                                                          1. 3

                                                            This is so cool! Anyone have a link to why India’s costs are so low? What are they doing better than SpaceX? Is it just a lower cost of labor?

                                                            1. 6

                                                              Engineers are paid extremely poorly in India. Even engineers with years of experience are paid below the United States’ one-person household federal poverty line (which was $12,490 in 2019).

                                                              Regarding software jobs: If you graduate with a Bachelor’s degree in Computer Science or a related field, entry-level software engineering jobs pay around INR 25,000 per month (that’s $4,350 per year) in the big cities, and around INR 15,000 per month ($2,610 per year) in smaller cities. After 3-5 years of experience, their salary might roughly double (e.g. to around $8,500 a year in big cities), but after that it grows very slowly.

                                                              Cost of living isn’t really that much lower in India, so the poor salaries aren’t justified. Rent and food are cheap, but everything else is more expensive. Things like laptops and smartphones are considered luxury goods, are usually manufactured abroad, and are subject to exorbitant tariffs and taxes. An identical laptop will cost 30% to 40% more in India versus the U.S., thanks to taxes. Therefore, software engineers making these abysmal wages often have to save up for several months to buy a $300 laptop. Compare that to the $149,000 average base salary in the U.S. according to TripleByte: https://triplebyte.com/software-engineer-salary The ratio is around 20x to 40x.

                                                              1. 3

                                                                This is a heavily IT skewed perspective about jobs and salaries. ISRO employs more than “IT” people. Looking at every job through the lens of IT is a weak take.

                                                                1. 2

                                                                  Very interesting numbers. Thanks for sharing!

                                                                  1. 4

                                                                    The parent comment is a generic comment about IT salaries.

                                                                    ISRO employees are central government employees. While their take-home salary might look small, they have other benefits, like government-provided housing (in some places, not in big cities), government-paid health care/reimbursements, access to central-government-run schools, and very good maternity leave (2 of the mission directors are women). And their salaries are adjusted for inflation (which private-sector IT people may not get; their fortunes are tied to their company’s performance, the Rs/$ exchange rate, etc.).

                                                                    Does a central govt employee make more than equally talented and well-placed IT person? Likely no. But they do enjoy stability (job for life), and other perks.

                                                                    1. 6

                                                                      My general point is that salaries are abysmal in India.

                                                                      their fortunes are tied to their company perf

                                                                      The fact that salaries can increase and especially decrease is likely a uniquely Indian phenomenon. I have literally heard of people’s actual salary being decreased by 30% to 40% because the company was going through a tight spot. This sort of thing is unheard of in the U.S.

                                                                      At the top American tech companies, you usually get significant stock grants or stock options that are directly tied to the company’s performance, but your salary never varies based on company performance.

                                                                      My last job paid a $140,000 base salary, $60,000 in stock options, $8,400 in free fully-vested retirement account contributions, top-of-the-line health coverage with Aetna and One Medical (that likely cost the company at least $12,000 a year), $2,000 in healthcare copay reimbursements, at least $5,000 a year in free lunches (served by Stadium), and another $1,000 a year in gym reimbursement. The stock options’ value is entirely dependent on the company’s growth, but if the company was even moderately successful, my total compensation would’ve easily been over $200,000 a year. And, I only have a Bachelor’s degree + a few years of experience. Although, I should add, my visa for this was denied; something that I wrote about in: https://www.reddit.com/r/h1b/comments/buesue/denial_by_uscis_for_140k_salary_software_engineer/

                                                                      govt paid health care/reimbursements, access to central govt run schools

                                                                      You just reminded me of the fact that health insurance isn’t included with most jobs (at private companies) in India. Insurance, in general, isn’t common. People are always expected to pay out-of-pocket for all their health care. Also, I assume that ISRO employees only get care at govt-run hospitals. Most of my relatives in India (who are generally very wealthy, by Indian standards) never use govt-run hospitals or schools. They exclusively go to “expensive” private hospitals, and their kids go to specialized private schools that have a heavy emphasis on college entrance exam training (with an IIT-JEE focused curriculum). Of course, these private schools and hospitals are dirt cheap by American standards.

                                                                      I think the situation in India is really sad. It pains me personally to even think about it. Talented engineers and scientists are being paid peanuts in India, compared to practically every country on Earth.

                                                                      1. 7

                                                                        If you peruse some of the salary threads in places like HN, you will find that salaries for software engineers are abysmal even in Western European countries (culturally closer to the US). The US is an outlier that way.

                                                                        1. 1

                                                                          Wow, your story is crazy. I’m so sorry you are going through that. Is there a way to appeal? What’s the latest?

                                                                          1. 2

                                                                            It’s possible to appeal, but an appeal would take between 6 months and 2 years–time that HyperScience simply can’t keep waiting for me. It’s been quite devastating for me. I feel I’m being forced out of my home of over a decade. I’ve been in the U.S. most of my adult life, so it truly does feel like home. It’s hard to describe the trauma of dealing with the horrific U.S. immigration system.

                                                                            Anyways, I’ve been keeping myself busy lately by working on personal projects. One of my favorite topics to daydream about has always been programming language design. So I’ve decided to throw myself into it.

                                                                            Right now, I’m actually reading the LLVM documentation[1]. As a kind of toy/learning project, I’ll be writing a LISP compiler (or rather, transforming a LISP interpreter I’ve already written into a LISP compiler). I feel like I rarely see new LISP compilers. It’ll be fun to create a LISP-to-native-code compiler via LLVM.

                                                                            [1] Specifically, this: https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/LangImpl03.html

                                                                            1. 2

                                                                              Very cool! You probably are already on it, but if not this is one of my favorite subreddits: https://www.reddit.com/r/ProgrammingLanguages/. It’s lots of folks helping each other out building languages.

                                                                              You’ve probably also seen this before, but if not I’d recommend “Realm of Racket”, and “The Definitive ANTLR4 Reference”, which were 2 of my favorite books when I started to get into PLD.

                                                                              Edit: So sorry about the immigration situation! I do hope it resolves beneficially soon and I know there’s probably nothing I can do (but let me know if there is!). Two of my close friends have been in similar situations (one a DREAMER and the other a postdoc), and it’s just so unfair to them and counterproductive to our country. I love your approach to instead focus on the personal projects instead of that ridiculous situation. As the saying goes “what you focus on increases”.

                                                                1. 4

                                                                  It seems like what they’re offering here is similar to IncludeOS: https://www.includeos.org/

                                                                  IncludeOS lets you turn a C++ program into a bootable single-application OS image.

                                                                  Same goal as Nanos, but restrained to one language: C++.

                                                                  1. 3

                                                                    So instead you should keep your developers working in a hovel with a giant bullpen, paint peeling on the walls, stained and smelly carpet, and so on…because “oh god they may think they can actually go home on the weekends now”?

                                                                    I can see the point of not patting yourself on the back or focusing on how far you have come, and instead keeping focused on what you want to do, but the way the article was couched seems a bit questionable to me.

                                                                    1. 3

                                                                      Indeed. Correlation and causation and all that.

                                                                      “Our exec staff spent time worrying about who had the corner office.” Yeah, see, the poison was already there. You don’t think they were waging turf wars before the new office?

                                                                      I would rephrase the article entirely. At the point where you have grown to need real offices, you will have inevitably hired some bad apples. They will fuck your company up. The new building is not a problem; it’s a handy reminder to review who and what your company has already become.

                                                                      1. 2

                                                                        I would upvote more if I could.

                                                                        Nearly all the startupish companies I have been at that “failed” could, I think, be traced to poor (or outright awful) management hires. Hiring (especially mid-to-senior management) is one of the most important things you do as a company, and oftentimes it seems to be one of the most lackadaisical. It is often just pushed off on human resources.

                                                                        Another important thing that is often overlooked at a company is firing bad hires.

                                                                        1. 1

                                                                          Another important thing that is often overlooked at a company is firing bad hires.

                                                                          What kind of negative consequences have you seen arise out of this (a tendency to not fire bad hires / keep them too long)?

                                                                          Also, why do companies hesitate to let go of a bad hire? Has it got to do with the law? Or is it something that’s got to do with just how companies operate today?

                                                                          1. 4

                                                                            Incompetence is hard to spot. In my experience, it goes something like this. Bad high manager promises the moon, fails to deliver. Finally CEO says changes need to be made. Word is passed down. Low level manager fires a bunch of otherwise competent engineers who were being led around in circles. CEO is told changes have been made.

                                                                            After the next failure, high manager fires the low managers. Reports back to CEO the real problem has been identified.

                                                                            After the third failure, CEO realizes high manager is incompetent and fires him. By the time this happens, the engineering staff, culture, and morale have been completely decimated.

                                                                            This kills startups because new CEOs are told the key to success is delegation. So they delegate their company to death and shut their ears to the screaming. Skip level meetings are vital. I’d say for any company under 100, the CEO should be personally talking to at least every team, if not every individual.

                                                                            1. 1

                                                                              Nicely stated.

                                                                              I’d say for any company under 100, the CEO should be personally talking to at least every team, if not every individual.

                                                                              I very much agree. I think, at the very least, the CEO of smaller companies would be well served having some regular ‘open office hours’.

                                                                            2. 1

                                                                              What kind of negative consequences have you seen arise out of this (a tendency to not fire bad hires / keep them too long)?

                                                                              Developers writing awful code that is hard to maintain, and/or not pulling their weight and resulting in poor team morale, buggy code, frustrating and needlessly long code review cycles, combative attitudes, and so on. Consequences are usually poor quality and poor team morale. This tends to be dealt with more readily though, as they are usually “lower rung” in the corporate hierarchy.

                                                                              Managers alienating their reports, and causing exodus of valued employees, poor morale, customer relations issues resulting in lost contracts and accounts. Depending on their level and influence, they may be hard to dislodge.

                                                                              Also, why do companies hesitate to let go of a bad hire? Has it got to do with the law? Or is it something that’s got to do with just how companies operate today?

                                                                              I think some of it was just that management got along well with each other (“oh, cut him a break, that is just the way he is”), and some of it was cronyism (“I have worked with him so long”, “pal of the CEO”, etc.), and some perhaps fear of change in general. Some of it may be due to law as well: depending on the jurisdiction or country, it can indeed be hard to fire someone. Some of it may be cost/benefit, possibly a contractual thing with severance requirements for early dismissal, so they will just ignore bad behavior until some timeframe passes. I am sure some of it is just that as a company gets to a certain size, people can fly under the radar to a certain extent too.

                                                                      1. 4

                                                                        Everything in this article is so contrary to what I’ve heard. I read a book recently (don’t recall its name) where the primary point was, beyond anything else — giving programmers offices of their own. This one simple thing seemed to be a massive productivity booster. It’s also the number one thing in the Field Guide to Developers by Joel Spolsky of Joel on Software fame.

                                                                        He also wrote a short piece called the Private Offices Redux, where he says:

                                                                        Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space.

                                                                        Don’t fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive. Last summer, the Project Aardvark interns were all in a big open space. The net result was that there was no such thing as a conversation between two people. Every time I went out there to talk to one of them, it became a conversation with all of them; every time two people had to talk, instead of going off to a quiet space somewhere, they just spoke directly to each other, interrupting the other two’s concentration. Although this slightly helps keep everyone “in the loop,” it also knocks programmers out of flow causing them to lose their concentration and devastating productivity, so I prefer to keep people in the loop using more formal methods, like weekly email status reports, and through informal methods like eating lunch together every day, which is why we have free catered lunches and a really big table.

                                                                        The above quote really says it all. If you were unsettled in your opinion with regard to private offices, the above quote should have fixed that.

                                                                        If I were to start a startup (something I would like to, eventually) — private offices would be a nonnegotiable requirement. Even if that meant taking an office in a less expensive area, I’d make sure everyone got an office of their own (even though it might be small).

                                                                        You’d be amazed at how tiny the offices of my professors at my university’s Department of Computer Science were. We’re talking about brilliant people here, and they were given small offices. So I think it’d be okay if a bunch of startuppers had tiny offices (at least during the budding stages of the startup). But an office is something they must all have.

                                                                        1. 3

                                                                          Everything in this article is so contrary to what I’ve heard. I read a book recently (don’t recall its name), where the primary point was, beyond anything else — giving programmers offices of their own.

                                                                          I think the book you are thinking of may be PeopleWare.

                                                                          1. 1

                                                                            Yes, that was the book. Thanks for pointing it out! :–)

                                                                            I also remember reading somewhere that this book was a must-read at Microsoft, at least in its early days. Don’t know if things have changed now at MS. (The second article I linked to in the grandparent comment, Private Offices Redux, seems to suggest it might have.)