1. 3

    Posting this mainly to register my disagreement with Titus.

    A C++ standard library without a hash table is the kind of insanity only a true academic could propose ;)

    My perspective: I joined the C++ believers about three years ago during the port of a large OO codebase. I have maintained and further developed this codebase since. At first C++ was quite challenging and unfamiliar, and without having a reasonably trustworthy standard library it would have been much more difficult. Now that I have several years behind me, it has clicked, and I find I prefer it in many contexts. To be fair, part of that is the familiarity I feel with C++, because I have known C fairly well for decades. It feels right, like an old power tool without quite so many safety switches. Dangerous, and efficient in a deft hand.

    I agree that C++ is not a great beginner’s language, but if you close the doors to transitional programmers you will kill the language.

    And as for std::map WRT stdlib ossification, just swallow your dang pride and specify std::map2 and let’s all move on, happier, and faster, and cleaner.

    Bonus prediction: within the next few years we will see C++ “borrow checkers” come online, and at least one major compiler will support borrow checking, on some basis.

    1. 3

      A C++ standard library without a hash table is the kind of insanity only a true academic could propose ;)

      The problem is that to have a hash map, you must decide how to implement it and you can’t necessarily please all C++ users. For example, by default, Qt’s QHash doesn’t work without a cryptographic-grade random number source. Probably the right choice for many applications, where security might be a concern, but not appropriate for low level embedded systems.
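
      To make the trade-off concrete, here is a minimal sketch (my own illustration, nothing from Qt) of a project that can’t rely on a random seed source supplying its own deterministic hash to std::unordered_map; the FNV-1a constants are the standard published ones, everything else is invented for the example:

      ```cpp
      #include <cstdint>
      #include <string>
      #include <unordered_map>

      // Deterministic FNV-1a string hash: no random seed, so behaviour is
      // reproducible on a seedless embedded target, but trivially attackable
      // if keys can come from an untrusted source.
      struct Fnv1aHash {
          std::size_t operator()(const std::string& s) const noexcept {
              std::uint64_t h = 0xcbf29ce484222325ull;  // FNV offset basis
              for (unsigned char c : s) {
                  h ^= c;
                  h *= 0x100000001b3ull;                // FNV prime
              }
              return static_cast<std::size_t>(h);
          }
      };

      int main() {
          // Same container interface, different policy: the hash (and its
          // security/determinism trade-off) is chosen by the user, not the library.
          std::unordered_map<std::string, int, Fnv1aHash> counts;
          counts["requests"] = 42;
      }
      ```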

      you will kill the language.

      As a professional C++ programmer, I am very much in favour of this.

      1. 2

        I’m working on a large C++ open source project. I originally chose C++ because I need a high performance systems language, and because the libraries I need are written in C++. My most recent requirement is to compile most of my code into WebAssembly, which C++ supports. I’m not super happy with C++, I’d consider rewriting in another language if there was a better choice. My wish list of language features would be:

        • Memory safe by default, lets you write small amounts of unsafe code encapsulated behind an API, in order to get high performance for a special data structure. (Like Rust.)
        • Generates small, fast WebAssembly code. (C is good for this; there isn’t a ton of code bloat from the standard library and template expansion.)
        • Easy integration with C++ libraries, because of these highly specialized libraries I need, not available in other languages. (C++ is the only obvious choice.)
        • Easy to use, low friction coding. (Rust is the opposite of this, and C++ can be frustrating as well.) C is a simple language, but I used to be a C programmer, and I don’t want to go back to having to manually manage reference counts in my reference counted objects, which are pervasive in my current code. I’d prefer an easy to use, memory safe language.
        1. 1

          If you haven’t already, have a look at D. I haven’t used it enough to recommend it yet, but it might fit some of your requirements.

          If you have already considered/tried D, I’d be interested to hear how well it lived up to its promises.

          1. 2

            Thanks for recommending D. It may not be suitable, though: I think I need a non-garbage-collected systems language. I’m implementing a dynamically typed language, and using D with garbage collection would prevent me from using either tagged pointers or NaN boxing to efficiently represent boxed values, because that would confuse the D garbage collector, causing memory corruption. Not using the garbage collector apparently means I can’t use the D standard library, which might be a problem. With no GC, I need reference-counted smart pointers, which are not supported by D. D has RefCounted, but that doesn’t support class instances or destructors, which makes it pretty useless for my purposes.
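
            For readers unfamiliar with the technique, here is a rough NaN-boxing sketch (purely illustrative, not the commenter’s actual design, and the tag layout is an assumption) showing why such values are opaque to a garbage collector: a pointer is hidden in the payload bits of a quiet NaN, so a GC scanning words without knowing the encoding will either miss the pointer or misread ordinary doubles as pointers.

            ```cpp
            #include <cstdint>
            #include <cstring>

            // Illustrative NaN boxing: a 64-bit Value is either a plain double or a
            // pointer smuggled into the payload bits of a quiet NaN.
            class Value {
                static constexpr std::uint64_t kQuietNaN = 0x7ff8000000000000ull;
                static constexpr std::uint64_t kPtrTag   = 0x0001000000000000ull;  // assumed tag
                static constexpr std::uint64_t kPtrMask  = 0x0000ffffffffffffull;
                std::uint64_t bits_;

            public:
                static Value fromDouble(double d) {
                    Value v;
                    std::memcpy(&v.bits_, &d, sizeof d);  // bit-preserving copy
                    return v;
                }
                static Value fromPointer(void* p) {
                    // Assumes user-space pointers fit in 48 bits (common on 64-bit
                    // platforms, but still an assumption).
                    Value v;
                    v.bits_ = kQuietNaN | kPtrTag | reinterpret_cast<std::uint64_t>(p);
                    return v;
                }
                bool isPointer() const {
                    return (bits_ & (kQuietNaN | kPtrTag)) == (kQuietNaN | kPtrTag);
                }
                double asDouble() const {
                    double d;
                    std::memcpy(&d, &bits_, sizeof d);
                    return d;
                }
                void* asPointer() const {
                    return reinterpret_cast<void*>(bits_ & kPtrMask);
                }
            };
            ```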

            The C++ interop looks good, better than other non-C++ languages I have considered.

            Another alternative that I haven’t researched is to use a dynamically typed language with an optimizing, ahead-of-time compiler, fast enough so that my own interpreter runs at acceptable speed. I would benefit from the underlying GC and tagged-pointer support for efficient boxed objects: no need to implement any of that myself. But then I still need good WebAssembly support (with compact executables) and some story for C++ interop.

        2. 1

          This may sound flippant, but it isn’t meant to be:

          If you want C++ with fewer features or library support, have you considered C?

          Basically, and allowing room for me to be wrong, I want Java without garbage collection and with pass-by-value and good multiplatform support, including a serviceable standard library.

        3. 1

          I agree with Titus. As he points out, C++ has a standard hash table (std::unordered_map), but it’s not very good, and it can’t be fixed to make it good, so you should use an external library (like Abseil) instead to get a decent hash table implementation. But C++ has the worst story for managing external dependencies in the industry. As Titus said, it’s chaos. And that’s why suggestions to use high quality external libraries fall flat.
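
          For what it’s worth, the code side of the swap is tiny; assuming you can get Abseil into your build at all, something like this sketch is close to a drop-in replacement, and the build wiring is exactly the chaos Titus describes:

          ```cpp
          #include <string>

          #include "absl/container/flat_hash_map.h"  // external dependency: Abseil

          int main() {
              // Near drop-in replacement for std::unordered_map, but open-addressed
              // and cache-friendly rather than node-based.
              absl::flat_hash_map<std::string, int> counts;
              counts["curl"] = 1;
              bool present = counts.contains("curl");
              (void)present;
              // The hard part isn't this code; it's fetching, building, and linking
              // Abseil consistently across compilers, platforms, and build systems.
          }
          ```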

          I looked at a draft of the proposed C++ graphics library about a year ago, and it was shockingly incompetent. The authors did not understand how colour spaces worked, I noticed, so it would be impossible to get correct results using the APIs they proposed. Maybe that’s improved, but after reading that draft, I don’t trust the process, and I would rather use a C++ graphics library that is written by industry veterans and used in real products.

          Take a look at Rust, which has a relatively small standard library, plus a really really good package manager and build system. Rust will never have graphics in the standard library, for good reason. If it is really easy to discover the best external C++ libraries, and it is really easy to add a library you discovered to a project you are working on, then that’s a better situation than including half-baked, amateurish standard library facilities (which can never be fixed to work correctly once there is experience with them in the field). And that’s also what Titus said.

          1. 1

            I looked at a draft of the proposed C++ graphics library about a year ago, and it was shockingly incompetent. The authors did not understand how colour spaces worked, I noticed, so it would be impossible to get correct results using the APIs they proposed. Maybe that’s improved, but after reading that draft, I don’t trust the process, and I would rather use a C++ graphics library that is written by industry veterans and used in real products.

            I won’t argue with that; I agree, graphics is going too far. But saying that the old hash table is bad and therefore we shouldn’t have a stdlib hash table seems to me like a non sequitur; it’s not like the std namespace is full and cannot accept a new symbol.

            Take a look at Rust

            I have. And I got as far as hyper and tokio and walked away after spending hours (days?) trying to spin up a small web server capable of handling some basic client reporting.

            To be clear, Rust itself is fine, so let’s not crucify me. And I would have used it for the port I mentioned if the platform tooling had been there, but it wasn’t, whereas it was for C++.

          2. 1

            If they start stripping things like hash tables out of the STL, I foresee an npm-style apocalypse of future C++ projects.

            Overall, though, this feels rather luddite-esque. A step before saying “We don’t need no stinkin’ C++! C can do everything without all the nonsense!”

            Edit: I think this also punts the issue of on-boarding new programmers. It’s already difficult enough to explain that you really have to learn THREE systems: the language, the STL, and the tool-chain. Now we’d just be moving more complexity from the STL into the tool-chain.

          1. 3

            I’m definitely rooting for these guys. I’m hopeful that their combined HW/SW approach will result in sustainable free software development funding.

            1. 6

              I took a class from Jeff, he was the kind of teacher that made me want to be a better student.

              1. 3

                Though cliché, I think nuking the escape key is a courageous thing to do. Use is limited, alternatives are available, the physical cost is relatively high, but it’s steeped in tradition and probably has vocal support.

                1. 11

                  Keeping the Caps Lock key is real courage.

                  1. 3

                    GOOD POINT~

                  2. 5

                    It depends on the benefits that are made available by removing the key. Doing away with the floppy drive was courageous because it freed up precious space in laptop cases and generally enabled better industrial design. It also helped push people toward more reliable and convenient forms of portable storage. If that OLED strip up at the top doesn’t have some pretty amazing benefits, I don’t think the loss of the Esc and Function keys will have been worth it.

                    1. 1

                      Two years on, and having had one for a few months, I can safely say: good lord, the Touch Bar is the exact antithesis of “Pro.”

                      Nothing says “garbage” like trying to single step in a debugger and needing to a) constantly look at your finger to make sure you’re hitting F8, and b) occasionally toggle the mode of the function keys because the OS doesn’t always remember or respect the per-application preferences.

                      It even sucks as a user-friendly UX, because the brightness at normal viewing angles is like 15% that of the screen so you have to squint to even see what the icons are. Terrible.

                      Ugh and the constant ghost touches due to it not being a tactile switch, also terrible.

                      Touch ID on the MBP OTOH, works great.

                    1. 2

                      One straightforward answer for a (small) step forward seems to be to publish libsyscall with a dependency on another library, glibc maybe, or maybe libsyscall_support or librust_runtime or whatever.

                      Can always go ahead and publish libsyscall_doesn’t_need_support later. But the kernel not providing a user-space API for its syscalls seems kind of like an oversight. Do they already publish a header or something to wrap unexposed syscalls if you already depend on libc? Maybe that’s what I’m missing.

                      1. 1

                        As I understand it, you can already call all syscalls. The issue is that it doesn’t look pretty when you do so.
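
                        A minimal Linux-specific sketch of what that looks like today: the generic syscall(2) wrapper works, but it is an untyped, varargs interface keyed by a magic number, with return values and errno handled by hand.

                        ```cpp
                        #include <sys/syscall.h>  // SYS_* numbers
                        #include <unistd.h>       // syscall()

                        #include <cstdio>

                        int main() {
                            // gettid had no dedicated glibc wrapper on older systems, so you
                            // call it by number; no type checking, and you interpret the
                            // result yourself.
                            long tid = syscall(SYS_gettid);
                            std::printf("thread id: %ld\n", tid);
                            return 0;
                        }
                        ```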

                      1. 2

                        How did it come to be like this? I don’t imagine this has anything to do with efficiency, judging by the amount of labour (on the employer’s end) exerted to make candidates jump through hoops.

                        1. 8

                          Nobody wants to take a risk and get blamed for a bad hire, so they set up more and more process. It’s like sifting for gold, except you have like twenty pans and you throw away everything that doesn’t make it through any of the sifters without looking.

                          1. 3

                            That explanation seems plausible, but then I wonder, why is the process so much more heavyweight in tech than just about any other field, including other STEM fields? In sheer number of hours of interviewing that it takes to get a job, counting all the phone screens, take-home assignments, in-person interviews, etc., tech is way out of sync with norms elsewhere. A typical hiring process in any other STEM field is a resume screen, followed by one phone screen (typically an hour), followed by an on-site interview that can last somewhere between a few hours and a full day.

                            1. 8

                              Survivorship bias could be why. The ones perpetuating this broken process are those who sailed through it.

                              There’s also a lot of talent floating around, and the average company won’t be screwed by an average hire. So even if you miss out on that quirky dev with no social skills but the ability to conjure up a regex interpreter solely from memory, it doesn’t really matter to them.

                              It should matter to startups, though, because hiring average devs means you’ll fail.

                              1. 3

                                Depends on the startup; until you have product-market fit, you don’t need amazing engineers so much as you need people who can churn out prototypes fast.

                              2. 4

                                It might be partly due to the volume of applicants. With tech you have:

                                1. Massive concentration of talent (e.g. Silicon Valley)
                                2. Remote work

                                For those reasons you can often get hundreds of applicants to a posting. Other STEM disciplines don’t support working remotely, and in some cases (think civil engineering) need their engineers to be physically on-site. I’d wager they tend to be much more dispersed around the country and companies can only draw from the local talent pool.

                                1. 3

                                  I applied to a remote, London-based, three-person not-a-startup. I did the homework and made it among the 50 or so people they interviewed by phone. They told me they got over 2,000 applications.

                                2. 4

                                  Particularly in other STEM fields it’s pretty common to have more rigorous formal education requirements as part of the hiring bar (either explicitly or by convention.) Software development has always been somewhat more open to those from other backgrounds, but the flip side to that is that there seems to be a desire to set a higher performance/skills bar (or at least look like you are) as a result. There are potentially pros and cons to both.

                                  I’d also wonder, particularly around the online tests/challenges/screenings/etc…, whether this is a result of tech people trying to come up with a tech solution to scale hiring the same way you’d approach scaling a technological system, and the resulting expansion in complexity.

                              3. 4

                                Hiring is hard, and a lot of work, and not something most engineers will willfully dive into. Therefore, at most companies, as much as possible of the hiring process gets farmed out to HR / management. And they do the best job they can, given their lack of domain knowledge. Unsurprisingly, they also favor potential employees that they think will be “good” based on their ability to sit, stay, heel, and jump through hoops. Fetch. Good boy. Who wants a cookie. ;)

                                Another take: mistakes really, really suck. And if you just add more analysis and testing to a hiring process, you’re more likely to spot a problem in a candidate.

                                1. 2

                                  I think mistakes are a big part of it. Software work is highly leveraged: what you write might run hundreds, thousands or millions of times per day. Being a little off can have big downstream consequences.

                                2. 4

                                  I think it’s partly because there’s no training for it in most jobs, it’s very different to expertise in software, it’s very unclear what best practices are (if there are any), and for a lot of people it’s a time suck out of their day, when they’ve already got lots of work to do.

                                  So you end up with these completely ad-hoc processes, wildly different from company to company (or differing even person to person during the interview), without anyone necessarily responsible for putting a system in place and getting it right.

                                  Not to mention HR incentives may not align (points / money for getting someone hired) with engineering, and then you’ve got engineers who use the interview as a way to show off their own smarts, or who ask irrelevant questions (because though you all do code review, no-one does interview question review), or who got the interview dumped on their plate at the last minute because someone else is putting out a dumpster fire, and they’ve never heard of you or seen your resume before they walk into the room…

                                  And decision making is ad-hoc, and the sync-up session after the interview gets put off for a couple of days because there’s a VP who wants to be on the call but they’re tied up in meetings, and in the meantime the candidate has an interview with another company so you’ve just moved forward with booking the onsite anyway…

                                  So many reasons :)

                                  1. 2

                                    It’s all marketing.

                                    I don’t think I would have taken any of my jobs if the recruiters were like “we’re not going to bother interviewing you because all we have is monkey work, when can you start?”, even though in hindsight that would have been totally adequate.

                                    So companies play hard to get and pretend 99% of their applicants are too bad to do the jobs on offer, when the reality is closer to the opposite.

                                    1. 1

                                      <[apple] insert [google] company [facebook] here [microsoft]> only hires the best.

                                    1. 1

                                      Pretty cool to just show up with a 10-15% performance improvement, possibly applicable to a wide range of programs. I wonder what instruction cache misses are like for e.g. browsers.

                                      1. 4

                                        One thing that the Terminal app in Mac OS X has always done extremely well is to automatically rewrap text when resized - even Linux terminal window apps, numerous though they may be, don’t seem to handle that well. Windows wouldn’t even let you resize the damn window - hopefully that will be fixed now!

                                        1. 8

                                          Windows console resizing is already fixed.

                                          1. 3

                                            Don’t GNOME Terminal and XFCE’s Terminal usually do this?

                                            And if they don’t, one can always use dvtm. In combination with st, one can get a very lightweight environment (it requires fewer resources than xterm, for example) that’s actually surprisingly nice to use.

                                            1. 1

                                              ST sounds nice. I’ll give it a whirl. Thanks :)

                                            2. 1

                                              Yeah, Terminal.app is amazing

                                            1. 5

                                              It’s platform strategy. If Apple stays with OpenGL (or adopts Vulkan) they are at the mercy of Khronos, and Khronos may not do a good job (in the long run – in the short run Vulkan seems pretty good.)

                                              Microsoft is never going to give up on DirectX; it grants incredible market control. $$$. OpenGL is owned by committee and can’t be relied on to be competitive (something the last 20 years of history have demonstrated), and since Khronos controls Vulkan there’s a substantial possibility committee politics destroy it too in the long run.

                                              3D engines all require multiple rendering backends anyways (DX9/11/12, OpenGL ES 2, ES 3, ES 3.1, desktop GL) so the incremental cost of implementing another backend for Metal is low. The cognitive cost is also low because Metal/DX12/Vulkan are similar. So Metal is better (for Apple) than Vulkan or OpenGL because they get complete control. Complete control is where Apple likes to be, they have a history of delivering strongly when they have complete control. Although Apple’s values rarely align with the prosumer, they do a good job at fulfilling their own vision.

                                              Really the main disadvantage of deprecating OpenGL on OSX is all the heckling from armchair quarterbacks, but let’s be honest, Apple DNGAF about the nerdygentsia’s opinion ;)

                                              If you’re an indie or OSS dev either you’re not doing high end rendering and OpenGL remains fine, or you can just use MoltenVK/GL and call it a day, it’s not worth getting angry about IMHO.

                                              1. 8

                                                It doesn’t seem like the problem is ever identifying the flaws in the organization; the flaws are everywhere, easy to find.

                                                And it usually feels like the flaws come from the top of the org. It’s hard (impossible?) to change your boss. The problem is finding and settling on the organization with the most acceptable set of flaws.

                                                1. 4

                                                  It’d be great if they’d just release a ‘bizdev free’ version on GoG for $10 or whatever.

                                                  1. 1

                                                    I’m kind of excited about Payment Request; would that integrate with e.g. Apple Pay? Reducing the overhead of paying sites is one of the things that I think could turn the web around. I have wished that e.g. Firefox would put a “$1” button on its toolbar, which would allow you to just give the site a dollar. Practical problems aside, it could really improve the best parts of the web.

                                                    1. 1

                                                      PaymentRequest does support Apple Pay and is also supported by Google, Samsung, and Microsoft at least - so building a PWA with in-app purchases is very much possible now.

                                                      As a side note, I actually built the kind of browser button you mention when I was at Flattr, and investigated ways to identify the rightful owner of a page so that they could claim the promised donation. We never got it to fully work on all sites, but it worked for some of the larger silos, like Twitter and GitHub, and also for those who had added rel-payment links to Flattr. We/I also investigated having it crawl people’s public identity graphs to try to find a connection between the owner of a page (through e.g. a rel-author link) and a verifiable identity – like their Twitter account or maybe some signed thing, Keybase-style. That ended up with the creation of https://github.com/voxpelli/relspider, but the crawler was never fully finished (e.g. smart recrawling was never implemented) and never put into production. I still like the idea though.

                                                    1. 1

                                                      I look forward to a possible future where competitive pressure forces a management shakeup at Qualcomm, or even splitting the radio business from the SoC business.

                                                      1. 1

                                                        Just curious, where did you find this link?

                                                        1. 1

                                                          Sorry, I don’t recall now :(

                                                        1. 9

                                                          Why do people think that CDN infrastructure should be radically neutral? Access to the internet does seem increasingly equivalent to free speech, but does that equate to a right to access other people’s syndication systems?

                                                          EFF quote: “Because Internet intermediaries, especially those with few competitors, control so much online speech” – eh? No one is stopping anyone from hosting their own content, and it’s not even hard. You don’t need a CDN to exercise speech. You need a CDN to reach a broad consumer audience. Granted I’d feel a little different if their ISP pulled the plug on them, but that’s not what we’re talking about.

                                                          1. 3

                                                            I’m actually way more concerned about the fact that DNS (which is a government-operated system) is gated behind the arbitrary control of private registrars, who have recently used that control to censor people. The ideal solution is to switch away from centralized DNS (a la namecoin), but until then there should be rules preventing denial of access to this public system.

                                                            1. 1

                                                              there should be rules

                                                              Enforced by whom? How would the trans-national procedures look?

                                                          1. 6

                                                            I think Intel is running out of gas. Their process lead and their design lead are both smaller than ever. Qualcomm and AMD are almost as good, and perfectly happy undercutting Intel. (Never mind that Apple is miles ahead of all three in terms of low power performance, but is mostly irrelevant to Intel because they don’t compete directly.)

                                                            This is just the Intel management lining up the lawyers to try to defend entrenched markets because they don’t have much technological advantage left.

                                                            1. 40

                                                              Long ago and far away, we had these things called “shared libraries” which allowed one to build code and reuse it, so that even if the build process was very long and complex, you only had to do it once. An elegant solution from a more civilized time.

                                                              1. 2

                                                                Elegant like a spiderweb.

                                                              1. 33

                                                                I think that’s missing the point I was trying to make. Even if 75% of them happened due to us using C, that fact alone would still not be a strong enough reason for me to reconsider our language of choice (at this point in time).

                                                                Ironically this is missing the point that memory safety advocates are trying to make. The response the post got is less about Stenberg’s refusal to switch languages (did anyone actually expect him to rewrite?), and more about how he is downplaying the severity of vulnerabilities and propagating the harmful “we don’t need memory safety, we have Coverity™” meme.

                                                                curl is currently one of the most distributed and most widely used software components in the universe, be it open or proprietary and there are easily way over three billion instances of it running in appliances, servers, computers and devices across the globe. Right now. In your phone. In your car. In your TV. In your computer. Etc.

                                                                So in other words, any security vulnerabilities that affect curl will have wide impact. How does this support his argument?

                                                                1. 12

                                                                  (did anyone actually expect him to rewrite?)

                                                                   I don’t think rewriting in a safe language would be the problem. I have ported a couple of 50k-line codebases between Objective-C, Java, and C++, and the rewrite wasn’t the hard part. It’s the tooling. Even using supported languages, it’s a ton of work to get the build systems to work well. Good luck getting your newish-but-safe-language codebase playing nice with the build systems for the big three platforms, consistently, fast, and with good developer support for e.g. debugging, logging, and build variants.

                                                                   cURL isn’t popular because it’s written in C, per se; it’s popular because it runs freaking everywhere and very nearly just works.

                                                                  I think if you want safe language adoption you should go to the people that are choosing to use cURL and talk to them, and work on their barriers to adoption. cURL is C.

                                                                  1. 3

                                                                    He has the benefit of being able to promote the virtues of a working and widely used program versus some vaporware.

                                                                    1. 14

                                                                      I’m having trouble understanding what you’re trying to say. He has a popular project, ok, and?

                                                                      1. 7

                                                                        I think he’s saying the project owner would say the good they’ve done outweighs the bad. That’s the impression I got from OP.

                                                                        1. 7

                                                                          What he’s saying is we’re comparing a tangible C program to a Rust program that does not exist. Faults of C notwithstanding, vaporware has all the attributes of the best software—except for that existence problem.

                                                                          1. 5

                                                                            Nah, if that’s all he wanted to say he wouldn’t have made such a big fuss about how popular his software is. He shoehorned that paragraph in there in an attempt to lend credibility to his arguments. Later he tries this again by implying that his detractors don’t code, or something:

                                                                            Those who can tell us with confidence how to run our project but who don’t actually show us any code.

                                                                            All of that is bunk. If maintaining a popular piece of software implied security expertise then PHP’s magic quotes would have never seen the light of day.

                                                                            1. 1

                                                                              I’m explaining Victor’s comment, not the OP. You lost the plot.

                                                                          2. 2

                                                                            Obviously, he knows what’s best, and knows everything, duh. /satire

                                                                      1. 5

                                                                        Do the economics of their business work out? Or are they just gobbling market with investment money? For the little company I work for CF is dramatically less expensive than MaxCDN, which itself was dramatically cheaper than AWS Cloudfront.

                                                                        And aside from that, I hate their attempts to differentiate via value-adds like this site scraper shield garbage, or their DDOS shield stuff. It doesn’t work well and isn’t trustworthy. I wish there was a CDN that just took an nginx config and got out of the way.

                                                                        1. 2

                                                                          Did Google do the CPU design? Is Rockchip just doing the fabrication?

                                                                          Odd world to have Google poised to join Apple as the best mobile CPU vendors. Maybe they got sick of Qualcomm’s relatively lackluster performance.

                                                                          1. 2

                                                                            I’m not sure there’s any evidence for a Google-designed CPU; if it was happening, it’d be pretty hard to hide hiring a team of that size.

                                                                            1. 4

                                                                              Right. Looks like an ARM-designed CPU core for sure.

                                                                              Last October, a product page for the Plus, then branded the Chromebook Pro, was leaked, ID'ing the chip as the Rockchip RK3399. Some folks benchmarked a dev board with it. Some early announcements about it exist too, also tagging it as based on Cortex-A72/A53 cores and a Mali GPU.

                                                                               There are also benchmarks out there of another A72-based SoC, the Kirin 950.

                                                                              1. 2

                                                                                There’s reasonable evidence of Google ramping up at least more competence in chip design over the past 3-5 years than they traditionally had, which seems to spawn rumors of a Google CPU every time they hire someone. Anecdotally from the perspective of academia, they do seem much more interested in CE majors than they once were, plus a few moderately high-profile hardware folks have ended up there, which would’ve been surprising in the past. But I agree it’s nowhere near the scale to be designing their own CPU. I don’t know what they’re actually doing, but assumed it was sub-CPU-level custom parts for their data centers.

                                                                                1. 8

                                                                                   CPU design is also a really small world; it’s almost all the same people bouncing between teams. You can trace back chip designs to the lineage of the people who made them; there are even entire categories of “pet features” that basically indicate who worked on the chip.

                                                                                  1. 3

                                                                                    Pet features, that’s neat. Like ISA features or SoC/peripheral stuff? Can you give an interesting example?

                                                                                    1. 10

                                                                                      One example is the write-through L1 cache, which iirc has a rather IBM-specific heritage. It also showed up in Bulldozer (look at who was on the team for why). A lot of people consider it to be a fairly bad idea for a variety of reasons.

                                                                                      Most of these features tend to be microarchitectural decisions (e.g. RoB/RS design choices, pipeline structures, branch predictor designs, FPU structures….), the kind of things that are worked on by quite a small group, so show a lot of heritage.

                                                                                      This is probably a slightly inaccurate and quite incomplete listing of current “big core” teams out there:

                                                                                      Intel: Core team A, Core team B, and the C team (Silvermont, I think)? They might have a D team too.

                                                                                      AMD: Jaguar (“cat”) team (members ~half laid off, ~half merged into Bulldozer), not sure what happened after Bulldozer, presumably old team rolled into Zen?

                                                                                      ARM: A53 team, A72 team, A73 team (Texas I think)

                                                                                      Apple

                                                                                      Samsung (M1)

                                                                                      Qualcomm (not sure what the status of this is after the death of mobile Snapdragon, but I think it’s still a thing)

                                                                                      nvidia (not sure what the status of this one is after Denver… but I think it’s still a thing)

                                                                                      Notably when a team is laid off, they all go work for other companies, so that’s how the heritage of one chip often folds into others.