This is pretty far off-topic, and most likely to result in a bunch of yelling back and forth between True Believers.
Flagged.
EDIT:
OP didn’t even bother to link to the claimed “increasing evidence”. This is a bait thread. Please don’t.
Shrug. I find the complete lack of political awareness at most of the tech companies I’ve worked at to be rather frustrating and I welcome an occasional thread on these topics in this venue.
It’s possible that many of your coworkers are more politically aware than they let on, and deliberately avoid talking about it in the workplace in order to avoid conflict with people who they need to work with in order to continue earning money.
All work is political. Taking a “Jesus, take the wheel” attitude toward the impact your employment decisions have on the world is nihilistic.
Not trumpeting all your political views in the workplace does not mean completely ignoring political incentives for employment or other decisions. I’m not sure what made you think GP is advocating that.
Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened. Perhaps your mental model needs updating… maybe your colleagues are better equipped to discuss broad topics politely than you previously imagined?
Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened.
Probably because everyone on this site is good and right-thinking — or knows well enough to keep his head down and his mouth shut.
(Which has nothing to do with the truth of either side’s beliefs; regardless of truth, why cause trouble for no gain?)
To me, the people on this site definitely handle these discussions better. Hard to say how much better, given that’s subjective. Let’s try for objective criteria: there are fewer flame wars, more people stick to the facts as they see them versus posting comments that are pure noise, and moderation techniques usually reduce the worst stuff without censoring or erasing civil dissenters. If those metrics are sound, then the Lobsters community is objectively better at political discussions than many sites.
Here are some articles for your reading.
https://www.newscientist.com/round-up/worse-climate/
Some of those articles link to other articles. You can get pretty deep if you want.
These all seem to say one thing: climate change is going to be worse faster than some other prediction said. But that does not even remotely address your claim that “organized human life might not be possible by the end of the century and possibly sooner”. What on earth makes you think you know anything about what conditions humans need to organize?
This is a good point. I guess my “evidence” would be past civilization collapse as a result of environmental destruction like what happened on Easter Island.
I would love to see research on using an AFL-style genetic algorithm based on (branch) coverage feedback for generating test cases in a QuickCheck-style property testing framework. You could do that with clang’s -fsanitize-coverage options, similar to what libFuzzer does but with type-aware input generation, shrinking, etc.
This is something I’ve been wanting to do as a side project for a very long time: instrument the output of the Elm compiler with code-coverage stats in the runtime, and use those stats from within the test runner for some kind of coverage-maximizing, AFL-style fuzzer.
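The core loop being described here can be sketched in plain Python rather than Elm or C. This is a toy illustration, not AFL or libFuzzer: it uses line coverage from `sys.settrace` as the feedback signal, and the function under test plus the integer-mutation strategy are made-up assumptions.

```python
import random
import sys

def run_with_coverage(fn, arg):
    """Run fn(arg), recording the set of (filename, lineno) lines executed."""
    lines = set()
    def tracer(frame, event, _arg):
        if event == "line":
            lines.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        fn(arg)
    finally:
        sys.settrace(None)
    return lines

def classify(n):
    # Toy function under test; the property being checked is "never raises".
    if n > 10:
        if n % 7 == 0:
            raise ValueError("bug reached")
        return "big"
    return "small"

# AFL-style loop: keep inputs that exercise new lines, mutate them further.
random.seed(1)
corpus, seen, counterexample = [0], set(), None
for _ in range(5000):
    child = random.choice(corpus) + random.randint(-10, 10)
    try:
        new_lines = run_with_coverage(classify, child) - seen
    except ValueError:
        counterexample = child   # property violated; a real tool would shrink this
        break
    if new_lines:                # input reached new code: keep it as a seed
        seen |= new_lines
        corpus.append(child)
```

A real implementation would use branch or edge coverage rather than line coverage, type-aware generators instead of integer mutation, and QuickCheck-style shrinking of the counterexample.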
Yesterday afternoon, the community replied: https://www.arm-basics.com/
Eh, it’s a pretty cheap shot, morally on the same level as the original page and much weaker in content. I hope it’s not representative of the larger RISC-V project.
I’m not too impressed with the counter-FUD but I think it’s hilarious that the riscv-basics.com people didn’t think to register arm-basics.com while they were at it.
Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?
If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.
Your users should never see the message “the authenticity of (host) cannot be established”
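One of the “multiple ways” (not named in the comment above, so treat this as one possible approach) is an SSH host certificate authority: the CA signs every server’s host key, and each client carries a single trust line instead of one fingerprint per host, so the TOFU prompt never appears. A sketch of the client-side `known_hosts` entry, with the hostname pattern and key material entirely hypothetical:

```
# known_hosts: trust any host key signed by the org host CA
@cert-authority *.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... org-host-ca
```

Server-side, the host key is signed with `ssh-keygen -s <ca_key> -h -n <hostname> ...`, producing a `-cert.pub` file that sshd presents to clients.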
Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?
It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.
I.e. you have to copy keys from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).
We won’t do much editing for grammar or meaning;
[…]
We probably don’t need to talk about “f*cking moron”. The caps in “AND STANDARD” is another way to indicate frustration, like the “honestly” above. […] None of these carry any meaning about the technical problem; they’re just expressions of anger.
[…]
This is a much better email. It has 43% as many words, but loses none of the meaning.
Do you see the problem? You are absolutely changing the meaning of the text… except for the technical bits. While the original email expressed anger and frustration at valuing standards over reality, your take makes the technical points alright, but stops there: it conveys none of the feelings the original author had and expressed in his rant.
You allude to this omission when you say “None of these carry any meaning about the technical problem” and justify it by saying “they’re just expressions of anger.” The assumption here is that anger and frustration are not valid feelings to express in this context, probably because you regard interactions on the LKML as part of a professional, corporate setting, and in the beginning you say:
If you insult people in professional interactions, you’ll find yourself increasingly alienated and excluded simply because people don’t like being insulted!
But that makes me wonder… we’re talking about Linus Torvalds here, who has been doing that exact thing for decades, on public mailing lists, for all the world to see, including a quarrel with a world-renowned professor when he was still a student himself. And while that does earn him some occasional backlash, I think he is hardly alienated or excluded by his collaborators; to the contrary, he fostered a community that made his little project… quite the success, one might say.
How come? I agree that insulting people in a corporate environment will usually not end well, so the answer must be that LKML was not always a corporate place, and still is not, to the degree that Linus and other prominent maintainers are sticking to their ways in spite of pressure to assimilate into the culture of the corporations that have embraced Linux development. And this, it seems to me, is at the heart of the matter: what was once an occasionally rough but also playful “hacker culture”, where strong feelings are held and things can get emotional, has grown into something that, for various reasons, big tech firms embrace and engage in. But their cultures eschew having soul in the game, and impoliteness is not tolerated, so when actors from these two cultures collaborate, sometimes attitudes will clash and sparks will fly.
Now, I don’t want to convince you that hacker culture is “right” in some way and socially tolerating some anger and insults is a good thing, except for noting, again, that it has some very successful projects to stand for it, otherwise we would not be talking about it. But what really bothers me is the cultural imperialism that I see in posts hating on Linus like yours does. As far as I can tell, you just came across a post with his mail and thought you’d bash on him some for his Bad Character. OK, that is a little unfair, because you made an effort to be constructive in your moralizing, but my point is, you were not involved in this incident, nor, as far as I can tell, any similar ones. Your only reason to engage is to promote your own culture, because it is Right and being angry is Wrong. You’re not content with letting the kernel community work things out on their own, because you know what is Right and Linus is Wrong and has to change. This galls me. At its heart this is the same attitude that led to indigenous cultures being destroyed around the globe. “We’ll show you how it’s done, and we need your land/kernel”.
Human culture and society is a deeply complex topic, and anything we think we know is probably wrong to some degree. Instead of going on crusades, however politely executed, I think we should be striving for tolerance and collaboration and work the inevitable problems out on the ground as they come. Just don’t shoot arrows across the river, please?
I always laugh when people come up with convoluted defenses for C and the effort that goes into that (even writing papers). Their attachment to this language has caused billions if not trillions worth of damages to society.
All of the defenses that I’ve seen, including this one, boil down to nonsense. Like others, the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift, and, for the things C is not needed for, yes, even JavaScript is better than C (if you’re not doing systems-programming).
Their attachment to this language has caused billions if not trillions worth of damages to society.
Their attachment to a language with known but manageable defects has created trillions if not more in value for society. Don’t be absurd.
[citation needed] on the defects of memory unsafety being manageable. To a first approximation every large C/C++ codebase overfloweth with exploitable vulnerabilities, even after decades of attempting to resolve them (Windows, Linux, Firefox, Chrome, Edge, to take a few examples.)
Compared to which widely used large codebase, in which language, for which application, that accepts and parses external data and yet has no exploitable vulnerabilities? BTW: http://cr.yp.to/qmail/guarantee.html
Your counterexample is a smaller, low-featured mail server written by a math and coding genius. I could cite Dean Karnazes doing ultramarathons as evidence of how far people can run. That doesn’t change the fact that almost all runners would drop before 50 miles, especially before 300. Likewise with C code: citing the best of the secure coders doesn’t change what most will do or have done. I took the author’s statement “to a first approximation every” to mean “almost all” rather than “every one.” It’s still true.
Whereas, Ada and Rust code have done a lot better on memory-safety even when non-experts are using them. Might be something to that.
I’m still asking for the non-C, widely used, large-scale system with significant parsing that has no errors.
That’s cheating, saying “non-C” and “widely used.” Most of the no-error parsing systems I’ve seen use a formal grammar with autogeneration. They usually extract to OCaml. Some also generate C just to plug into the ecosystem, since it’s a C/C++-based ecosystem; it’s incidental in those cases and could be any language, since the real programming is in the grammar and the generator. An example of that is the parser in the Mongrel server, which was doing a solid job when I was following it. I’m not sure if they found vulnerabilities in it later.
At the bottom of the page you linked:
I’ve mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs.
Not great support for your claim.
There was an integer overflow reported in qmail in 2005. Bernstein does not consider this a vulnerability.
That’s not what I meant by attachment. Their interest in C certainly created much value.
Their attachment to this language has caused billions if not trillions worth of damages to society.
Inflammatory much? I’m highly skeptical that the damages have reached trillions, especially when you consider what wouldn’t have been built without C.
Tony Hoare, null’s creator, regrets its invention and says that just inserting that one idea has cost billions; he mentions it in talks. It’s interesting that language creators even reckon their own mistakes in terms of billions of dollars in damages.
“I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
If the billion dollar mistake was the null pointer, the C gets function is a multi-billion dollar mistake that created the opportunity for malware and viruses to thrive.”
He’s deluded. You want a billion dollar mistake: try CSP/Occam plus Hoare Logic. Null is a necessary byproduct of implementing total functions that approximate partial ones. See, for example, McCarthy in 1958 defining a LISP search function with a null return on failure. http://www.softwarepreservation.org/projects/LISP/MIT/AIM-001.pdf
“ try CSP/Occam plus Hoare Logic”
I think you meant formal verification, which is arguable. They could’ve wasted a hundred million easily on the useless stuff. Two out of three are bad examples, though.
Spin has had a ton of industrial success, easily knocking out problems in protocols and hardware that are hard to find via other methods. With hardware, the defects could’ve caused recalls like the Pentium bug. Likewise, Hoare-style logic has been doing its job in Design-by-Contract, which knocks time off the debugging and maintenance phases, the most expensive ones. If anything, not using tech like this can add up to a billion-dollar mistake over time.
Occam looks like it was a large waste of money, esp in the Transputer.
Note what he does not claim is that the net result of C’s continued existence is negative. Something can have massive defects and still be an improvement over the alternatives.
“especially when you consider what wouldn’t have been built without C.”
I just countered that. The language didn’t have to be built the way it was, or persist that way. We could be building new stuff in a C-compatible language with many of the benefits of HLLs like Smalltalk, LISP, Ada, or Rust, with the legacy C getting gradually rewritten over time. If that had started in the 90’s, we could have the equivalent of a LISP machine for C code, OS, and browser by now.
It didn’t have to, but it was, and it was then used to create tremendous value. Although I concur with the numerous shortcomings of C, and it’s past time to move on, I also prefer the concrete over the hypothetical.
The world is a messy place, and what actually happens is more interesting (and more realistic, obviously) than what people think could have happened. There are plenty of examples of this inside and outside of engineering.
The major problem I see with this “concrete”, winners-take-all mindset is that it encourages Whig history, which can’t distinguish the merely victorious from the inevitable. In order to learn from the past, we need to understand what alternatives were present before we can hope to discern what may have caused some to succeed and others to fail.
Imagine if someone created Car2 which crashed 10% of the time that Car did, but Car just happened to win. Sure, Car created tremendous value. Do you really think people you’re arguing with think that most systems software, which is written in C, is not extremely valuable?
It would be valuable even if C was twice as bad. Because no one is arguing about absolute value, that’s a silly thing to impute. This is about opportunity cost.
Now we can debate whether this opportunity cost is an issue. Whether C is really comparatively bad. But that’s a different discussion, one where it doesn’t matter that C created value absolutely.
C is still much more widely used than those safer alternatives, I don’t see how laughing off a fact is better than researching its causes.
Billions of lines of COBOL run mission-critical services of the top 500 companies in America. Better to research the causes of this than to laugh it off. Are you ready to give up C for COBOL on mainframes, or do you think the popularity of both was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.
Are you ready to give up C for COBOL on mainframes, or do you think the popularity of both was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.
Researching the causes of something doesn’t imply taking a stance on it, if anything, taking a stance on something should hopefully imply you’ve researched it. Even with your comment I still don’t see how laughing off a fact is better than researching its causes.
You might be interested in laughing about all the cobol still in use, or in research that looks into the causes of that. I’m in the latter camp.
I think you might be confused at what I’m laughing at. If someone wrote up a paper about how we should continue to use COBOL for reasons X, Y, Z, I would laugh at that too.
Cobol has some interesting features(!) that make it very “safe”. Referring to the 85 standard:
X. No runtime stack, no stack overflow vulnerabilities
Y. No dynamic memory allocation, impossible to consume heap
Z. All memory statically allocated (see Y); no buffer overflows
We should use COBOL with contracts for transactions on the blockchains. The reasons are:
X. It’s already got compilers big businesses are willing to bet their future on.
Y. It supports decimal math instead of floating point, so no conversions are needed between real-world decimal amounts and approximate, binary computer math.
Z. It’s been used in transaction-processing systems that have run for decades with no major downtime or financial losses disclosed to investors.
λ. It can be mathematically verified by some people who understand the letter on the left.
You can laugh. You’d still be missing out on a potentially $25+ million opportunity for IBM. Your call.
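Point Y above is easy to demonstrate. A quick Python sketch of the binary-float rounding that decimal arithmetic avoids (Python’s stdlib `decimal` stands in for COBOL’s decimal types here):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so cent sums drift:
assert 0.1 + 0.2 != 0.3

# Decimal arithmetic keeps exact base-10 values, as ledgers expect:
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
```

This is the whole reason financial code avoids raw binary floats: the rounding error isn’t a bug in any one language, it’s inherent to base-2 representation of base-10 fractions.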
Your call.
I believe you just made it your call, Nick. $25+ million opportunity, according to you. What are you waiting for?
You’re right! I’ll pitch IBM’s senior executives on it the first chance I get. I’ll even put on a $600 suit so they know I have more business acumen than most coin pitchers. I’ll use phrases like vertical integration of the coin stack. Haha.
I’ll be posting about that in a reply later tonight.
Good god man, get a blog already.
Like, seriously, do we need to pass a hat around or something? :P
Haha. Someone actually built me a prototype a while back. Makes me feel guilty that I don’t have one, instead of just the usual lazy-or-overloaded excuse.
That’s cool. Setting one up isn’t the hard part. The hard part is doing a presentable design, organizing the complex activities I do, moving my write-ups into it, adding metadata, and so on. I’m still not sure how much I should worry about the design. One’s site can be considered a marketing tool for people that might offer jobs and such. I’d go into more detail but you’d tell me “that might be a better fit for Barnacles.” :P
Skip the presentable design. Dan Luu’s blog does pretty well even though it’s not working hard to be easy on the eyes. The rest of that stuff you can add as you go. Remember, perfect is the enemy of good.
Making me feel guilty again. Nah, I’ll build it myself, likely on a VPS.
And damn, time has been flying. Doesn’t feel like several months have passed on my end.
Well, we have those already, and they’re called Rust, Swift, ….
And D maybe too. D’s “better-c” is pretty interesting, in my mind.
If you had actually made a serious effort at understanding the article, you might have come away with an understanding of what Rust, Swift, etc. are lacking to be a better C. By laughing at it, you learned nothing.
the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift
Those (and Ada, and others) don’t translate to assembly well. And they’re harder to implement than, say, C90.
Is there a reason why you believe that other languages don’t translate to assembly well?
It’s true those other languages are harder to implement, but it seems to be a moot point to me when compilers for them already exist.
Some users of C need an assembly-level understanding of what their code does. With most other languages that isn’t really achievable. It is also increasingly less possible with modern C compilers, and said users aren’t very happy about it (see various rants by Torvalds about braindamaged compilers etc.)
“Some users of C need an assembly-level understanding of what their code does.”
Which C doesn’t give them, due to compiler differences and the effects of optimization. Aside from spotting errors, it’s why folks in safety-critical are required to check the assembly against the code. The C language is certainly closer to assembly behavior, but doesn’t by itself give assembly-level understanding.
So true. Every time I use the internet, the solid engineering of the Java/Jscript components just blows me away.
Everyone prefers the smell of their own … software stack. I can only judge by what I can use now based on the merits I can measure. I don’t write new services in C, but the best operating systems are still written in it.
“but the best operating systems are still written in it.”
That’s an incidental part of history, though. People who are writing, say, a new x86 OS with a language balancing safety, maintenance, performance, and so on might not choose C. At least three chose Rust, one Ada, one SPARK, several Java, several C#, one LISP, one Haskell, one Go, and many C++. Plenty of choices are being explored, including languages C coders might say aren’t good for OS’s.
Additionally, many choosing C or C++ say it’s for existing tooling, tutorials, talent, or libraries. Those are also incidental to its history rather than advantages of its language design. They’re definitely worthwhile reasons to choose a language for a project, but they shift the argument away from the language itself, implying they had better things in mind that weren’t usable yet for that project.
I think you misinterpreted what I meant. I don’t think the best operating systems are written in C because of C. I am just stating that the best current operating system I can run a website from is written in C, I’ll switch as soon as it is practical and beneficial to switch.
It’s a neat language that I hope to see more of. That said, I haven’t seen any evidence of this:
“but the argument is that the increase in productivity and reduction of friction when memory-safe mechanisms are absent more than make up for the time lost in tracking down errors, especially when good programmers tend to produce relatively few errors.”
I have seen studies showing that safe, high-level programming gives productivity boosts. I’ve also seen a few where programmers drop into unsafe, manual control where they want, with that wrapped in a safe or high-level interface. Combining these should probably be the default approach unless one can justify the benefits of not doing so. Also, we’ve seen some nice case studies recently of game developers getting positive results trying Design-by-Contract and Rust. If the author was right, such things would’ve slowed them down with no benefits. Similarly for the game developers that are using Nim with its controllable, low-latency GC.
It’s not mentioned in the primer, but the compiler has built-in support for linting. You can access the AST during compilation and stop the build, so rather than enforcing good practice by shoehorning it into types, you can enforce good practice by directly checking for misuse.
I do wonder if people will just end up badly implementing their own type systems on top of that though.
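Nim specifics aside, the idea of failing the build by inspecting the AST has a rough analogue in other languages. A minimal sketch using Python’s stdlib `ast` module, where the banned-call rule is just an example of “directly checking for misuse”:

```python
import ast

SOURCE = "result = eval(user_input)"  # hypothetical snippet to vet

def lint(source):
    """Walk the parsed AST and reject direct calls to eval(),
    aborting the 'build' with an error."""
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            raise SyntaxError("direct eval() call is disallowed")

try:
    lint(SOURCE)
    ok = True
except SyntaxError:
    ok = False  # the banned call was caught before 'shipping'
```

The difference in Nim is that this runs inside compilation itself via macros, rather than as a separate tool pass.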
You’re making some big claims here (even a normative statement), seemingly without much to back it up.
I have seen studies showing that safe, high-level programming gives productivity boosts.
Too general. For some tasks high-level languages are clearly desirable; for others they clearly don’t work at all. The question at hand is which level of abstraction is desirable for writing/maintaining a large, complex codebase that has to reliably mangle a huge amount of data about 60 times per second. If a study does not replicate these conditions, it is worthless for answering the question.
I’ve also seen a few where programmers drop into unsafe, manual control where they want, with that wrapped in a safe or high-level interface. Combining these should probably be the default approach unless one can justify the benefits of not doing so.
Does writing your engine in C(++) and slapping Lua on top for the game logic count? Many games work like that, it pretty much is the default approach; no immature, unproven languages needed.
Also, we’ve seen some nice case studies recently of game developers getting positive results trying Design-by-Contract and Rust.
Unless a serious game in Rust has actually been released and enjoyed some success, we haven’t. A lone Rust enthusiast playing around for a few months and writing a blog post about it does not tell us a whole lot about how Rust might fare under realistic conditions.
If the author was right, such things would’ve slowed them down with no benefits.
What makes you so sure they haven’t?
Every time I read posts like yours I wonder if Rust evangelists and other C haters ever seriously ponder why people use and like C, why it became the giant on whose shoulders so much of modern software stands. I’m not claiming it all comes down to technical superiority, but I do think there is a reason C has stood the test of time like no other programming language in computing’s (admittedly not very long) history. And it certainly wasn’t for a lack of competition.
Edit: I was reminded of Some Were Meant for C, which should be required reading for anyone developing a new “systems” language.
“Every time I read posts like yours I wonder if Rust evangelists and other C haters ever seriously ponder why people use and like C, why it became the giant on whose shoulders so much of modern software stands.”
It’s rare that I get accused of not reading enough history or CompSci papers on programming; I post them here constantly. On the contrary, it seems most C programmers didn’t study the history or design of the language, since most can’t even correctly answer: “what language invented the philosophy of being relatively small, efficiency-focused, and ‘the programmer is in control’, allowing direct manipulation of memory?” They’ll almost always say C, since their beliefs come from stories people told them instead of historical research. The correct answer is BCPL: the predecessor that C lifted its key traits off of. Both were designed almost exclusively around the extremely limited hardware of the time. They included just what could run and compile on weak hardware, modified with the arbitrary, personal preferences of the creators. Here’s a summary of a presentation that goes paper by paper studying the evolution of those designs; the presentation is the Vimeo link in the references. The only thing I retract so far is “not designed for portability”, since a C fan got me a reference where they claimed that goal after the fact.
“You’re making some big claims here (even a normative statement), seemingly without much to back it up.”
After accusing me of doing no research, you ironically don’t know about the prior studies comparing languages on traits like these. There were many studies done in the research literature. Despite the variance in them, certain things were consistent. Of the high-level languages, they all killed C on productivity, defect rate, and maintenance. Ada usually beat C++, too, except in one study I saw. Courtesy of Derek Jones, my favorite C vs Ada study is this, since it’s so apples-to-apples. Here’s one from patrickdlogan where they do the same system in 10 languages, with everything that’s been mainstream beating C in productivity; Smalltalk leaves them all in the dust. The LISP studies showed a similar boost in development speed, both supporting quick iterations, working with a live image/data, and great debugging help with that. LISP also has its full-strength, easy-to-run macros with their benefits. Example of the benefits of a LISP-based OS and Smalltalk live-editing (search for “Apple” for the part about Jobs’ visit).
Hell, I don’t have to speculate to you, given two people implemented a C/C++ subset in a Scheme in a relatively short time while maintaining the benefits of both. Between that, Chez Scheme running on a Z80, and the PreScheme alternative to C, that kind of shows we’ve been able to do even what C does with more productivity, safer-by-default semantics, easier debugging, and easier portability since somewhere from the 1980’s to the mid-to-late 1990’s, depending on what you desired. For a modern take, one could do something like ZL above in Racket Scheme or IDE-supported Common LISP. You’d get faster iterations and more productivity, especially extensibility, due to better design for these things. Good luck doing it with C’s macros and semantics: that combo is so difficult it only got a formal specification with undefined behavior a few years ago (KCC), despite folks wanting to do that for decades. The small LISPs and Pascals had that stuff in the 1980’s-1990’s due to being designed for easier analysis. Other tech has been beating it on predictable concurrency, such as Concurrent Pascal, Ada Ravenscar, Eiffel SCOOP, and recently Rust. Various DSLs and parallel languages make for easier parallel code. They could’ve been a DSL in a better-designed C, which is the approach some projects in Rust and Nim are taking due to macro support.
So, we’re not knocking it over speculative weaknesses. It was a great bit of hackery back in the early 1970’s for getting work done on a machine that was a piece of shit, using borrowed tech from a machine that was a bigger piece of shit. Hundreds of person-years of investment into compiler and processor optimizations have kept it pretty fast, too. None of this changes the fact that it has a bad design relative to competitors in many ways. None of it changes that C got smashed every time it went against another language balancing productivity, defect rate, and ease of non-breaking changes. The picture is worse when you realize that the people building that ecosystem could’ve built a better language that FFI’d and compiled to C as an option, to get its ecosystem benefits without its problems. That means what preserves C is basically a mix of inertia and social and economic factors having about nothing to do with its “design.” And if I wanted performance, I’d look into synthesis, superoptimizers, and DSLs designed to take advantage of modern, semi-parallel hardware. C fails on that, too, these days, to the point that most people code that stuff in assembly by hand; those other approaches have recently been outperforming even that in some areas.
There’s not a technical reason left for its superiority. It’s just social and economic factors at this point driving the use of A Bad Language for today’s needs. Better to build, use, and/or improve A Good Language using C only where you absolutely have to. Or want to if you simply like it for personal preference.
@nullp0tr @friendlysock I responded here since it was original article that led to re-post of the other one. I’d just be reusing a lot of same points. I also countered the other article in the original submission akkartik linked to in that thread.
It’s rare I get accused of not reading enough history or CompSci papers on programming.
And I didn’t. I’m aware of your posts and interests. Rather, I’m accusing you of reading too much and doing too little. In the real world, C dominates. Yet studies have found only disadvantages. Clearly the studies are missing something fundamental. Instead of looking for that missing something, you keep pointing at the studies, as if they contain the answer. They cannot, because their conclusions fly in the face of reality.
After accusing me of doing no research, you ironically don’t know about the prior studies comparing languages on traits like these.
I’m aware of (some of) the research. Color me unimpressed. I don’t need a study to know that C is a crappy language for application development. Nor to figure out that C has serious warts and problems all over, even for OS development.
What I would very much like to see more research on is why C keeps winning anyway. You chalk it up to “socio-economic factors” and call it a day. I call that a job not finished. It doesn’t explain why people use and like products built in C. It doesn’t explain why some very capable engineers defend C with a vigor that puts any Rust evangelist to shame[1]. It doesn’t explain what these socio-economic factors are, but they clearly matter a great deal. Otherwise, we agree, C wouldn’t be where it is today.[2]
Legacy/being well-established is certainly one factor keeping C alive, but how did it become so dominant? As a computer history buff you might argue that it is because universities and their inhabitants had cheap access to UNIX systems back in the day and so students learned C and stuck to it when they graduated. There’s a lesson in that argument: having experience in a programming language is a very important factor in how productive you will be in it, which means throwing that experience away by using a very different language is a huge loss. Those who earn their bread and butter by having and using that experience will be very reluctant to do so. Most working programmers simply can/will not afford to learn a radically different language. It is an even bigger loss to society at large than to individuals because you lose teachers, senior engineers and others who pass on experience too. There are other network/feedback effects (ignoring the obvious technical ones), but I think you get the point.
The consequence is that small incremental improvements on existing (and proven) technology are vastly preferable to top-down fundamental redesigns, even if that means the end result isn’t anywhere close to pretty. This has been proven again and again across industries and life in general. x86 is another prominent example.
And then there is Go. Here’s a language that is built by people who understand C and the need to ease adoption. And look! People are actually using it! Go has easily replaced more C code on my machines than all the strongly typed, super-duper safe languages combined and probably in the internet-at-large too. And, predictably, the Haskell enthusiasts rage and shake their fists at it, because it isn’t the panacea they imagine their favorite strongly-typed language to be. Or in other words: because they do not understand, and are unwilling to learn, the most basic aspects about what keeps C alive.
Anyway, I wanted to circle back to your original claim that memory-safety mechanisms do not inhibit productivity in game programming, but this is already pretty long and I’m hungry. Maybe I’ll write another comment to address that later.
[1] And they make good on their claims by building things that are actually successful, instead of sticking to cheap talk about the theoretical advantages and showing off their skills at doing weird stuff to type systems.
[2] That does not mean I don’t believe there are important technical aspects to C which are missing in languages intended to replace it. See the “Some were meant for C” paper.
“What I would very much like to see more research on is why C keeps winning anyway.”
It’s not “winning anyway” any more than Windows keeps winning on the desktop, .NET/Java in enterprise, PHP in web, and so on. It got the first-mover advantage in its niche. It spread like wildfire via UNIX and other platforms that got huge. It was a mix of availability, hacker culture, and eventually the open-source movement. Microsoft and IBM were using monopoly tactics, suing or acquiring competitors that copied their stuff with different implementations or challenged them. Even companies like Borland, with Pascals outperforming C on balance of productivity and speed, saw the writing on the wall and jumped on the bandwagon, adding to it. The momentum and effects of several massive trends moving together that all found common ground in C at that time led to oligopolies of legacy code and decades of data locked into it. They have a massive, locked-in base of code exposing C and C++. To work on it or add to it, the easiest route was learning C or C++, which increased the pool of developers even more. It’s self-reinforcing after a certain point.
I don’t have hard data on why non-UNIX groups made their choices. I know Apple had Pascal and LISP at one point. They eventually went with Objective-C for some reason with probably some C in there in the middle. IBM was using PL/S for some mainframe stuff and C for some other stuff. There’s gaps in what I can explain but each one I see is a company or group jumping on it. Then, it gets more talent, money, and compilers down the line. What explains it is Gabriel’s Worse is Better approach, herd mentality of crowds, network effects, and so on. You can’t replicate the success of C/C++ as is since it appeared at key moments in history that intersected. They won’t repeat. There will be similar opportunities where something outside a language is about to make waves where a new language can itself make waves by being central to or parasite off it. Economics, herd mentality, and network effects tell us it will work even better if it’s built on top of an existing ecosystem. A nice example is Clojure that built something pretty different on top of Java’s ecosystem. Plus all the stuff allowing seamless interoperability with C, building on Javascript in browsers, or for scripting integration with something like Apache/nginx.
So, it’s pretty clear from the lack of design to the enormous success/innovation to the stagnation of piles of code that C’s gone through some cycle of adoption driven by economics. There’s lasting lessons that successful alternatives are using. Once you know that, though, there’s not much more to learn out of necessity. Plenty for curiosity but the technical state of the art has far advanced. That’s about the best I can tell you about that part of your post after a long day. :)
“There’s a lesson in that argument: having experience in a programming language is a very important factor in how productive you will be in it, which means throwing that experience away by using a very different language is a huge loss.”
Now we’re in the “C is crap with certain benefits from its history that might still justify adoption” territory. This is essentially an economic argument saying you’ve invested time in something that will pay dividends or you don’t want to invest or have time to get a new language to that point. You see, if it’s designed right, this isn’t going to hurt you that much. The safe alternatives to C in language and libraries kept all that investment while making some stuff safe-by-default. There’s some languages that are just easy to understand but compile to C. Then, there’s those that are really different that still challenge your claim. The apples-to-apples study I linked on Ada and C specifically was concerned about their C developers doing worse in Ada due to a lack of experience. However, those developers did better since the language basically stopped a lot of problems that even experienced developers kept hitting in C since it couldn’t catch them. At least for that case, this concern was refuted even with a safe, systems language with a steep learning curve.
So, it’s not always true. It is worth considering carefully, though. Do note that my current advice for C alternatives is keeping close enough to C to capitalize on existing understanding, code, and compilers.
“Most working programmers simply can/will not afford to learn a radically different language.”
That’s a hypothesis that’s not proven. If anything, I think the evidence is strongly against you with programmers learning new languages and frameworks all the time to keep their skills relevant. Many are responding positively to the new groups of productive, safe languages such as Go, Rust, and Nim. Those fighting the borrow-checker in Rust about to quit usually chill when I tell them they can just use reference counting on the hard stuff till they figure it out. It’s a fast, safe alternative to Go at that point. If they want, they can also turn off the safety to basically make it a high-level, C alternative. There’s knobs to turn to reduce difficulty of using new, possibly-better tooling.
“People are actually using it!”
Go was a language designed by a multi-billion dollar corporation whose team had famous people. The corporation then pushed it strongly. Then, there was adoption. This model previously gave us humongous ecosystems for Java and then C#/.NET. They even gave Java a C-like syntax to increase adoption rate. Go also ironically was based on the simple, safe, GC’d approach of Niklaus Wirth that Pike experienced in Oberon-2. That was a language philosophy C users fought up to that point. Took a celebrity, a big company, and strong focus on tooling to get people to try what was essentially a C-like Oberon or ALGOL68. So, lasting lessons are getting famous people involved, have big companies with platform monopolies pushing it, make it look kind of like things they like, and strong investments in tooling as always.
“Anyway, I wanted to circle back to your original claim that memory-safety mechanisms do not inhibit productivity in game programming, but this is already pretty long and I’m hungry. Maybe I’ll write another comment to address that later.”
I’m interested in that. Remember, though, that I’m fine with being practical in an environment where high resource efficiency takes priority over everything else. In that case, the approach would be safe-by-default with contracts for stuff like range checks or other preconditions. Tests generated from those contracts with automatic checks showing you the failures. If performance testing showed it too slow, then the checks in the fast path can be removed where necessary to speed it up. Throw more automated analysis, testing, or fuzzing at those parts to make up for it. If it’s a GC, there’s low-latency and real-time designs one might consider. My limited experience doing games a long time ago taught me memory pools help in some places. Hell, regions used in some safe languages sound a lot like them.
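One minimal way to sketch that “checked by default, strip the fast path” idea is Rust’s `debug_assert!`, which enforces a precondition in debug and test builds but compiles to nothing in release builds. The function name and the valid range here are invented for illustration:

```rust
/// Mix two audio samples. Contract: both inputs lie in [-1.0, 1.0].
/// `debug_assert!` checks the contract in debug/test builds; with
/// `--release` it compiles away, leaving the fast path unchecked, so
/// you lean on testing and fuzzing to cover that path instead.
fn mix_samples(a: f32, b: f32) -> f32 {
    debug_assert!((-1.0..=1.0).contains(&a), "sample a out of range");
    debug_assert!((-1.0..=1.0).contains(&b), "sample b out of range");
    ((a + b) / 2.0).clamp(-1.0, 1.0)
}

fn main() {
    println!("{}", mix_samples(0.5, 0.7));
}
```

The same shape works with any contract system that can be selectively disabled per hot path rather than globally.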
So, I’m advocating you use what safety you can by default dialing it down only as necessary to meet your critical requirements. The end result might be way better than, slightly better than, or equivalent to C. It might be worse if it’s some combo of a DSL and assembly for hardware acceleration which simply can have bugs just due to immaturity. Your very own example of Go plus recent ones with Rust show a lot of high-performance, latency-sensitive apps can go safe by default picking unsafety carefully.
I don’t understand the author’s objection to Outreachy. As far as I can tell, they want to fund some interns from marginalized groups so that they can work on open-source. They are not preventing the author from working on open-source. They are not preventing the author from funding interns he approves of from working on open-source. What is the problem?
Outreachy funds members of specific minority groups and would not fund a cisgender white guy’s internship. He decries this as discrimination.
On this topic, the term discrimination has differing interpretations and it’s very easy for folks to talk past each other when it comes up. It sounds like he’s using it in a way that means disfavoring people based on the sex or race they belong to. Another popular definition is that it only applies to actions taken against groups that have been historically discriminated against. This use gets really strong pushback from people who disagree with the aims or means of projects like Outreachy as begging the question, making an assumption that precludes meaningful discussion of related issues.
It’s not only that Outreachy would not fund a cisgender white guy’s internship. Outreachy also would not fund an Asian person’s internship. Asians are a minority group that has been historically discriminated against. Outreachy is discriminating against a specific minority. In summary, Outreachy is simply discriminating; it is not using the alternative definition of discrimination.
(Might be relevant: I am Asian.)
I asked Karen Sandler. This is the reason for the selection of groups:
<karenesq> JordiGH: I saw the lobsters thread. the expansion within the US to the non-gender related criteria was based on the publication by multiple tech companies of their own diversity statistics. We just expanded our criteria to the groups who were by far the least represented.
Thanks a lot for clarifying this with Karen Sandler!
I think this proves beyond any shadow of doubt that Outreachy is concerned not with historical injustice, but with present disparity.
He had a pretty fair description of where the disputes were coming from. Far as what you’re saying on Outreachy, the Asian part still fits into it as even cultural diversity classes I’ve seen say the stereotypes around Asians are positive for stuff like being smart or educated. Overly positive to the point that suicide due to pressure to achieve was a bit higher according to those sources. There’s lots of Asians brought into tech sector due to a mix of stereotypes and H1-B. The commonness of white males and Asians in software development might be why they were excluded with the white males. That makes sense to me if I look at it through the view they likely have of who is privileged in tech.
Yes, it makes sense that way, but it does not make sense in the “historical discrimination” sense pushcx argued. I believe this is evidence that these organizations are concerned with the present disparity, not with the history. Therefore, I believe they should cease to (dishonestly, I think) make the history argument.
Well, if you were a woman or identified as one they would accept you, regardless of whether you were Asian or not. I do wonder why they chose to reach out to the particular groups they picked.
And you have to pick some groups. If you pick none/all, then you’re not doing anything different than GSoC, and there already is a GSoC, so there would be no point for Outreachy.
You can pick groups that have been historically discriminated against, as pushcx suggested. Outreachy chose otherwise.
To nitpick, I was talking about the term “discrimination” because I’ve seen it as a source of people talking past each other, not advocating for an action or even a particular definition of the term. Advocating my politics would’ve compromised my ability to effectively moderate, though incorrect assumptions were still made about the politics of the post I removed and that I did so out of disagreement, so… shrug
I think the author’s point is that offering an internship for only specific groups is discrimination. From a certain point of view, I understand how people see it that way. I also understand how it’s seen as fair. Whether that’s really discrimination or not is up for debate.
What’s not up for debate is that companies or people should be able to give their money however they feel like it. It’s their money. If a company wants to only give their money to Black Africans from Phuthaditjhaba, that’s their choice! Fine by me!
Edit: trying to make it clear I don’t want to debate, but make the money point.
It is discrimination, that’s what discrimination means. But that doesn’t automatically make it unfair or net wrong.
The alternative is inclusive supply plus random selection. You identify the various groups that exist. Go out of your way to bring in a certain number of potential candidates from each one. The selection process is blind. Whoever is selected gets the help. Maybe an auditable process on top of that. This is a fair process that boosts minorities on average at whatever ratio you’re doing the invites. It helps whites and males, too.
That’s the kind of thing I push. Plus, different ways to improve the blindness of the evaluation processes. That is worth a lot of research given how much politics factors into performance evaluations in workplaces. It affects everyone but minority members even more per the data. Those methods, an equal pull among various categories, and blind select are about as fair as it gets. Although I don’t know exact methods, I did see GapJumpers describing something that sounds closer to this with positive results. So, the less-discriminating way of correcting imbalances still achieves that goal. The others aren’t strictly necessary.
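The two-stage process above can be sketched in code. Everything here is invented for illustration (the `Candidate` record, the numeric `score`, the group labels); a real pipeline would also randomize invitations within each group and keep an audit log:

```rust
use std::collections::HashMap;

// Hypothetical candidate record; `group` is the demographic info the
// evaluator must never see.
struct Candidate {
    id: u32,
    group: &'static str,
    score: u32,
}

// Stage 1 (inclusive supply): invite the same number of candidates
// from each group. Stage 2 (blind select): drop the group field,
// rank by score alone, and take the top `hires`.
fn blind_select(pool: &[Candidate], per_group: usize, hires: usize) -> Vec<u32> {
    let mut by_group: HashMap<&str, Vec<&Candidate>> = HashMap::new();
    for c in pool {
        by_group.entry(c.group).or_default().push(c);
    }
    let mut invited: Vec<(u32, u32)> = by_group
        .values()
        .flat_map(|v| v.iter().take(per_group))
        .map(|c| (c.id, c.score)) // group info stripped before evaluation
        .collect();
    invited.sort_by(|a, b| b.1.cmp(&a.1)); // blind ranking: score only
    invited.into_iter().take(hires).map(|(id, _)| id).collect()
}

fn main() {
    let pool = vec![
        Candidate { id: 1, group: "a", score: 90 },
        Candidate { id: 2, group: "a", score: 80 },
        Candidate { id: 3, group: "b", score: 85 },
        Candidate { id: 4, group: "b", score: 70 },
    ];
    println!("{:?}", blind_select(&pool, 2, 2)); // [1, 3]
}
```

The key property is that whatever bias exists lives only in stage 1, where it is explicit and auditable, while stage 2 sees nothing to be biased about.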
The next scenario is specific categories getting pulled in more than everyone with organizations helping people in the other ones exclusively to boost them. That’s what’s going on here. Given the circumstances, I’m not going to knock them even if not as fair as other method. They’re still helping. It looks less discriminatory if one views it at a high level where each group addresses those they’re biased for. I did want to show the alternative since it rarely gets mentioned, though.
I really agree with this. I was with a company that did a teenage code academy. I have a masters, and did a lot of work tutoring undergrads and really want to get back into teaching/academia.
I wanted to teach, but was actually pushed down the list because they wanted to give teaching positions to female staff first. I was told I could take a support role. The company also did a lot of promotion specifically to all-girls schools to try to pull women in. They had males in the classes too, but the promotion was pretty biased.
Also I want to point out that I had a stronger teaching background/qualifications than some of the other people put in those positions.
I’m for fairness and giving people opportunity, but I feel as if efforts to stop discrimination just lead to more discrimination. The thing is, we’re scientists and engineers. We know the maths. We can come up with better ways to pull in good random distributions of minorities/non-minorities and don’t have to resort to workshops that promote just another equal but opposite mono-culture. If anything you do potential developers a disservice by having workshops that are only women instead of half-and-half. You get a really one sided narrative.
I appreciate you sharing that example. It mirrors some that have happened to me. Your case is a good example of sexism against a man who might be more qualified than a woman hired based on gender. I’ll also note that so-called “token hires” are often treated poorly once they get in. I’ve seen small organizations where that’s not true since the leadership just really believed in being good to people and bringing in different folks. They’re rare. Most seem to be environments people won’t want to be in since conflict or resentment increases.
In your case and most of those, random + blind selection might have solved the problem over time without further discrimination or resentment. If the process is auditable, everyone knows the race or gender part gave everyone a fair shot. From there, it was performance. That’s a meaningful improvement to me in reducing the negative effects that can kick in when correcting imbalances. What I will say, though, is I don’t think we can always do this since performance in some jobs is highly face-to-face, based on how groups perceive the performer, etc. I’m still uncertain if something other than quotas can help with those.
Most jobs I see people apply for can be measured, though. If it can be measured, it can sometimes already be blinded or may be measured blindly if we develop techniques for that.
I agree with these comments, plus, thanks for sharing a real life example. We are definitely fighting discrimination with more discrimination doing things the current way. For a bit I’ve thought that a blind evaluation process would be best. It may not be perfect, but it seems like a step in a better direction. It’s encouraging to see other people talking about it.
One other thought- I think we as society are handling race, gender, age, etc problems wrong. Often, it’s how a certain group ‘A’ has persecuted another group ‘B’. However, this isn’t really fair for the people in group ‘A’ that have nothing to do with what the other people are doing. Because they share the same gender/race/whatever, they are lumped in. Part of this seems to be human nature, and it’s not always wrong. But maybe fighting these battles in more specific cases would help.
I think the problem here is that whites and males don’t need extra help. They already get enough help from their position in society. Sure, equal distribution sounds great, but adding an equal amount to everyone doesn’t make them equal; it doesn’t nullify the discrepancy that was there before. Is it good to do so? Yes, of course, but it would be better served and better for society to focus on helping those without built-in privilege to counteract the advantage that white males have.
There are lots of people in bad situations who are white and male. Saying someone’s race and gender determines how much help they have had in life seems both racist and sexist.
I’m not saying that it applies in all circumstances. But I am saying that they have a much larger support structure available to them, even if they didn’t get started on the same footing as other examples.
It’s not directly because of their race and sex, it’s because of their privilege. That’s the fundamental difference.
I don’t even know how much it matters if it was true. Especially in rural or poor areas of white people. Their support structure is usually some close friends, family, people they live with, and so on. Often food stamps, too. Their transportation or Internet might be unreliable. Few jobs close to them. They have to pack up and leave putting themselves or their family into the unknown with about no money to save for both the move and higher cost of living many areas with more jobs will entail. Lots of drug abuse and suicide among these groups relative to whites in general. Most just hope they get a decent job where management isn’t too abusive and the lowish wages cover the bills. Then, you talk about how they have “a much larger support structure available to them” “because of their privilege.” They’d just stare at you blinking wondering what you’re talking about.
Put Your Solutions Where Your Ideology Is
Since you talk about advantages of privilege and support structures, I’m curious what you’d recommend to a few laypeople in my white family who will work, have basic to good people skills, and are non-technical. They each have a job in an area where there aren’t lots of good jobs. They make enough money to make rent. I often have trouble contacting them because they “have no minutes” on their phones. The areas they’re in have no wired Internet directly to renters (i.e. pay extra for crap), satellite, spotty connections, or they can’t afford it. Some have transportation, others lost theirs as it died with four digit repairs eclipsing 1-2 digits of surplus money. All their bosses exploit them to whatever extent possible. All the bosses underschedule them where the work couldn’t get done then try to work them to death to do it. The schedules they demand are horrible with at least two of us having schedules that shift anywhere from morning to evening to graveyard shift in mid-week. It kills people slowly over time. Meanwhile, mentally drains them in a way that prevents them learning deep stuff that could get them in good jobs. Most of them and their friends feel like zombies due to scheduling with them just watching TV, chilling with friends/family, or something otherwise comfortable on off days. This is more prevalent as companies like Kronos push their scheduling optimizations into big businesses with smaller ones following suit. Although not among current family now, many of them in the past worked 2-3 jobs with about no time to sleep or have fun just to survive. Gets worse when they have an infant or kids.
This is the kind of stuff common among poor and working classes throughout America, including white people. Is this the average situation of you, your friends, and/or most white males or females you know of? These people “don’t need help?” I’m stretching my brain to try to figure out how what you’re saying fits their situation. In my view, they don’t have help so much as an endless supply of obstacles ranging from not affording bills to their evil bosses whose references they may depend on to police or government punishing them with utility bill-sized tickets for being poor. What is your specific recommendation for white people without any surplus of money, spotty Internet, unreliable transportation, and heavily-disrupted sleep?
Think quickly, too, because white people in these situations aren’t allowed much time to think between their stressful jobs (often multiple) and families to attend to. Gotta come up with solutions about on instinct. Just take the few minutes of clarity a poor, white person might have to solve a problem while in the bathroom or waiting in line at a store. It’s gotta work with almost no thought, energy, savings, or credit score. What you got? I’ll pass it on to see if they think it’s hopeful or contributes to the entertainment for the day. Hope and entertainment is about the most I can give to the person I’m visiting Saturday since their “privilege” hasn’t brought them much of anything else.
I’m not saying that it’s applicable in every situation; I am specifically talking about the tech industry. I don’t think it’s about prejudice in this case. I think it’s about fixing the tech culture, which white males have an advantage in, regardless of their economic background. White males don’t always have privilege, that would be a preposterous claim. But it’s pretty lopsided in their favor.
I am specifically talking about the tech industry.
It’s probably true if narrowed to tech industry. It seems to favor white and Asian males at least in bottom roles. Gets whiter as it goes up. Unfortunately, they also discriminate more heavily on age, background, etc. They want us in there for the lower-paying stuff but block us from there in a lot of areas. It’s why I recommend young people considering tech avoid it if they’re worried about age discrimination or try to move into management at some point. Seems to reduce the risk a bit.
Your comment is a great illustration of the danger of generalizing things on the basis of race or gender, mistakenly classifying a lot of people as “privileged”. Ideally, the goal of a charity should be to help unprivileged people in general, for whatever reason they are unprivileged, not because of their race or gender.
“It’s not directly because of their race and sex, it’s because of their privilege. That’s the fundamental difference.”
But that’s not a difference to other racist/sexist/discriminatory thinking at all. Racists generally don’t dislike black people because they’re black. They think they’re on average less intelligent, undisciplined, whatever, and that this justifies discriminating against the entirety of black people, treating individuals primarily as a product of their group membership.
You’re doing the exact same thing, only you think “white people are privileged, they don’t need extra help” instead of “black people are dumb, they shouldn’t get good jobs”. In both cases the vast individual differences are ignored in favor of the superficial criteria of group membership. That is exactly what discrimination is.
You’re right in that I did assume most white males are well off, and it is a good point that they need help too. However, I still think that the ideas of diversifying the tech industry are a worthy goal, and I think that having a dedicated organization that focuses on only the underrepresented groups is valuable. I just don’t think that white males have the same kind of cultural bias against them in participating in this industry that the demographics that Outreachy have, and counteracting that is Outreachy’s goal. Yes, they are excluding groups, but trying to help a demographic or collection of demographics necessarily excludes the other demographic. How could it work otherwise?
Asians are heavily overrepresented in tech. To be fair, the reason we are overrepresented in tech (as in medicine) is likely because software development (like medicine) is an endeavour that requires expertise in challenging technical knowledge to be successful, which means that (unlike Hollywood) you can’t just stick with white people because there simply aren’t enough of them available to do all the work. So Asians who were shut out of other industries (like theatre) flocked to Tech. Black men are similarly overrepresented in the NBA but unfortunately the market for pro basketball players is a bit smaller than the market for software developers.
Do they exclude Asians? I must have missed that one. I don’t think excluding that demographic is justified.
Do they exclude Asians?
Yes they do. Quoting Outreachy Eligibility Rules:
You live in the United States or you are a U.S. national or permanent resident living abroad, AND you are a person of any gender who is Black/African American, Hispanic/Latin@, Native American/American Indian, Alaska Native, Native Hawaiian, or Pacific Islander
In my opinion, this is carefully worded to exclude Asians without mentioning Asians, even going so far as mentioning Pacific Islander.
It’s a simple calculus of opportunity. Allowing those who already have ample opportunity (i.e. white, cis males) into Outreachy’s funding defeats the point of specifically targeting those who don’t have as much opportunity. It wouldn’t do anything to help balance the amount of opportunity in the world, which is Outreachy’s end goal here.
It’s the author’s idea that they deserve opportunity which is the problem. It’s very entitled, and it betrays that the author can’t understand that they are in a privileged position that prevents them from receiving aid. It’s the same reason the wealthy don’t need tax cuts.
Outreachy’s end goal seems to be balancing the amount of opportunity in the world for all, except for the Asian minority.
Each of us gets to choose between doing good and doing best. The best is the enemy of the good. If Outreachy settles for acting against the worst imbalance (in its view) and leaving the rest, that’s just their choosing good over best.
You’re also confusing their present action with their end goals. Those who choose “best” work directly towards their end goal, but Outreachy is in the “good” camp. By picking a worst part of the problem and working on that part, they implicitly say that their current work might be done and there’ll still be work to do before reaching the end goal.
What’s not up for debate is that companies or people should be able to give their money however they feel like it.
That is debatable. But, I too think Outreachy is well within their rights.
I’m not going to complain about discrimination in that organization since they’re a focused group helping people. It’s debatable whether it should be done differently. I’m glad they’re helping people. I will note that what you just said applies to minority members, too. Quick example.
While doing mass-market, customer service (First World slavery), I ran an experiment treating everyone in a slightly-positive way with no differences in speech or action based on common events instead of treating them way better than they deserved like we normally did. I operated off a script rotating lines so it wasn’t obvious what I was doing. I did this with different customers in a new environment for months. Rather than appreciation, I got more claims of racism, sexism, and ageism then than I ever did at that company. It was clear they didn’t know what equal treatment or meritocracy felt like. So many individuals or companies must have spoiled them that experiencing equality once made them “know” people they interacted with were racist, sexist, etc. There were irritated people among white males but they just demanded better service based on brand. This happened with coworkers in some environments, too, when I came in not being overly selfless. The whites and males just considered me slightly selfish trading favors where a number of non-whites or women suspected it was because they were (insert category here). They stopped thinking that after I started treating them better than other people did and doing more of the work myself. So, it was only “equal” when the white male was doing more of the work, giving more service in one-way relationships, etc.
I’d love to see a larger study done on that kind of thing to remove any personal or local biases that might have been going on. My current guess is that their beliefs about what racism or sexism are shifted their perceptions to mis-label the events. Unlike me, they clearly don’t go out of their way to look for more possibilities for such things. I can tell you they often did in the general case for other topics. They were smart or open-minded people. Enter politics or religion, the mind becomes more narrow showing people what they want to see. I spent most of my life in that same mental trap. It’s a constant fight to re-examine those beliefs looking at life experiences in different ways.
So, I’m skeptical when minority members tell me something was about their status because I’ve personally witnessed them miscategorizing so many situations. They did it by default actually any time they encountered provable equality or meritocracy. Truth told, though, most things do mix forms of politics and merit leaning toward politics. I saw them react to a lot of that, too. I’m still skeptical since those situations usually have more political biases going on than just race or gender. I can’t tell without being there or seeing some data eliminating variables what caused whatever they tell me.
You got jokes, lol. :) More like I’m collecting data on many views from each group to test my hypotheses, whereas many of my opponents are suppressing alternative views in data collection, in interpretation, and in enforcement. Actually, it seems to be the default on all sides to do something like that. Any moderate who listens closely to those who disagree, looking for evidence of their points, is an outlier. Something is wrong with that at a fundamental level.
So, I then brought in my anecdotes to illustrate it, given that I never see them in opponents’ data or models. They might be right and my anecdotes wrong. I just think their model should address the dissent in their arguments, along with reasons it does or doesn’t matter. The existence of dissent by non-haters in minority categories should be a real thing that’s considered.
I think that the information asymmetry that you had with your anecdotes affected some of the reactions you got. For one, if someone considers your actions negative in some way, they are conditioned by society to assume that you were being prejudiced. If your workplace was one that had more of a negative connotation (perhaps a debt collection service or what have you), that goes double. That’s a reason for the perceived negativity that your white male colleagues didn’t even have to consider, and they concluded that you were just being moderately nice. Notice that you didn’t have to be specifically discriminatory, nor was it necessarily fair. It’s just one more negative thing that happens because prejudice does exist. I would imagine that you would not have had so many negative reactions if you had explained exactly what you were doing vis-a-vis the randomization of greetings and such. I think I would discount perceived discrimination if someone did that to me.
Yes, it’s a ludicrous hissy fit. Especially considering that LLVM began at UIUC which, like many (most? all?) universities, has scholarships which are only awarded to members of underrepresented groups–so he’d have never joined the project in the first place if this were truly a principled stand and not just an excuse to whine about “the social injustice movement.” (I bet this guy thinks it’s really clever to spell Microsoft with a $, too.)
The point is a bit bluntly made, but it’s for a reason. There’s a certain kind of internet posting style which uses techniques like changing “social justice movement” to “social injustice movement” to frame the author’s point of view. Once upon a time “Micro$oft” was common in this posting style.
For extreme cases of this, see RMS’ writing (Kindle=Swindle, etc).
(The problem with these techniques, IMO, is that they’re never as clever and convincing as the person writing them thinks that they are. Maybe they appeal to some people who already agree with that point of view, but they can turn off anyone else…)
I think there is a difference here. “Microsoft” is not framing any point of view. “social justice movement”, on the other hand, is already framing certain point of view. I think “social injustice movement” is an acceptable alternative to “so-called social justice movement”, because prefixing “so-called” every time is inconvenient.
“Teach Ansible to talk to Github on your behalf” enables all your servers to establish arbitrary SSH connections using the private key(s) in your agent. That’s pretty terrible unless you explicitly and consciously decide to have zero isolation between your hosts. Similarly for storing all secrets in one file: you’re sharing all the secrets with all the hosts.
If the author does think that’s fine, then I think the article should at least clearly state the implications, as certainly not everyone agrees. I, for example, manage a bunch of servers with ansible where maybe a dozen people have root access. I don’t want those people to be able to SSH anywhere with my keys.
As for “Add Github to known_hosts properly and securely”, doing the keyscan on your laptop does nothing to prevent MITM attacks, not on first use nor on subsequent executions of the task. It will just write whatever ssh-keyscan returns this time into the known_hosts. The bit about having to write another play for updating seems wrong to me. Since talking to Github via ssh would blow up badly if Github ever changed their hostkey, I think hardcoding it is a fine solution. (If we ignore the general inadequacy of transport security for authenticating code you’re about to execute, but that’s a different discussion.)
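To make the hardcoding suggestion concrete, a sketch of an Ansible task using the `ansible.builtin.known_hosts` module; the key string below is a placeholder, not GitHub’s real key, which you should take from GitHub’s published fingerprints rather than from a live ssh-keyscan:

```yaml
# Pin a hardcoded host key instead of trusting ssh-keyscan at deploy time.
# The key value here is a placeholder for illustration only.
- name: Pin github.com host key
  ansible.builtin.known_hosts:
    name: github.com
    key: "github.com ssh-ed25519 AAAA...placeholder..."
    path: /etc/ssh/ssh_known_hosts
    state: present
```

If GitHub ever rotates its key, the task fails loudly rather than silently trusting a new key, which is the behavior you want.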
I agree with this. I thought the points were overall pretty good, especially the variable handling, vagrant setup, and error handling. All points I came to through some hard experience. I don’t think a lot of this is easy to understand when getting started and I like to see articles like this.
But the SSH connections are pretty important and it involves the security engineering of the application architecture. I do some not-great-things as well but they are all an element of the politics behind the infrastructure rather than designing a secure system. Telling the difference is not obvious.
I’d take the points about SSH with a grain of salt and not architect like this unless you’re sure it has to be done. I understand it’s a trade-off. A good self-test is to name the trade-offs and discuss them with your team.
This makes me appreciate zero values in Go. Instead of having to write a builder, I’d just declare a variable and set fields on it. I realize Rust insists on explicit initialization, but you could at least approximate that here: just write a function that returns a struct and set fields on that. What’s the advantage of the with_whatever methods?
If this is your use case, Rust has a protocol around the Default trait and field initialization syntax.
#[derive(Default)]
struct Foo {
field1: u32,
field2: u32
}
fn main() {
let foo = Foo {
field1: 1,
.. Foo::default()
};
}
Alternatively, you can implement Default on that.
impl Default for Foo {
fn default() -> Foo {
Foo { field1: 3, field2: 2 }
}
}
The advantage of Builders is that they are lazy and can be passed around. So, for example, I can have a library that pre-builds requests in a certain fashion and then hand them off to a user defined function that sets additional data.
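A minimal sketch of that laziness, with hypothetical `Request`/`RequestBuilder` names (not from any real crate): a library function pre-configures a builder, and the caller adds data and builds later.

```rust
#[derive(Debug)]
struct Request {
    url: String,
    retries: u32,
}

#[derive(Clone)]
struct RequestBuilder {
    url: String,
    retries: u32,
}

impl RequestBuilder {
    fn new(url: &str) -> Self {
        RequestBuilder { url: url.to_string(), retries: 0 }
    }

    // Each setter consumes and returns the builder, so it can be chained.
    fn with_retries(mut self, n: u32) -> Self {
        self.retries = n;
        self
    }

    fn build(self) -> Request {
        Request { url: self.url, retries: self.retries }
    }
}

// The "library" hands out a partially configured builder...
fn preconfigured() -> RequestBuilder {
    RequestBuilder::new("https://example.com")
}

fn main() {
    // ...and user code finishes it whenever convenient.
    let req = preconfigured().with_retries(3).build();
    println!("{} {}", req.url, req.retries);
}
```

Because the builder is an ordinary value, it can be cloned, stored, or passed through several layers before anything is actually constructed.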
Struct fields in Rust are private outside their defining module by default, so once the struct leaves that scope, callers may not be allowed to assign those fields directly.
Additionally, I forgot this: Rust is a generic language, and patterns like the following aren’t uncommon.
fn with_path<P: AsRef<Path>>(&mut self, pathlike: P) {
    // ...
}
This means the method takes anything that can be turned into a (filesystem) Path: a string slice, a Path, an owned PathBuf, etc.
It enables strictly more uses, yes, at the cost of some verbosity, which you can avoid in simple cases.
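As a sketch of how such a generic setter gets called, assuming a hypothetical `Config` struct (the names here are illustrative, not from the article):

```rust
use std::path::{Path, PathBuf};

struct Config {
    path: PathBuf,
}

impl Config {
    // Accepts anything that can be borrowed as a Path.
    fn with_path<P: AsRef<Path>>(&mut self, pathlike: P) {
        self.path = pathlike.as_ref().to_path_buf();
    }
}

fn main() {
    let mut cfg = Config { path: PathBuf::new() };
    cfg.with_path("relative/file.txt");     // &str
    cfg.with_path(String::from("/tmp/a"));  // String
    cfg.with_path(PathBuf::from("/tmp/b")); // PathBuf
    println!("{}", cfg.path.display());
}
```

The caller never has to convert anything by hand; the bound does the work at each call site.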
A TBuilder doesn’t type check as a valid T object. This is the real value of the builder pattern (in Rust and Go and Java and…Haskell): one can write a fairly strict definition for T, and wherever one has a function that accepts a T, one can be sure that the T is fully constructed and valid to use. The TBuilder is there for your CLI parser, web API, or chatbot to use, while stitching together a full object from defaults+input, or some other combination of sources.
Distinguishing between T and TBuilder prevents a partial object from masquerading as a full object.
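A minimal sketch of that distinction, with hypothetical `Server`/`ServerBuilder` names: `build()` is the only way to obtain a `Server`, so any function accepting a `&Server` can rely on it being complete.

```rust
struct Server {
    host: String,
    port: u16,
}

#[derive(Default)]
struct ServerBuilder {
    host: Option<String>,
    port: Option<u16>,
}

impl ServerBuilder {
    fn host(mut self, h: &str) -> Self {
        self.host = Some(h.to_string());
        self
    }
    fn port(mut self, p: u16) -> Self {
        self.port = Some(p);
        self
    }
    // Only a complete builder yields a Server; a partial one yields an error.
    fn build(self) -> Result<Server, String> {
        Ok(Server {
            host: self.host.ok_or("missing host")?,
            port: self.port.ok_or("missing port")?,
        })
    }
}

// Anything taking &Server never has to re-check for missing fields.
fn serve(s: &Server) -> String {
    format!("{}:{}", s.host, s.port)
}

fn main() {
    let ok = ServerBuilder::default().host("localhost").port(8080).build();
    let partial = ServerBuilder::default().host("localhost").build();
    assert!(partial.is_err()); // a ServerBuilder can't masquerade as a Server
    println!("{}", serve(&ok.unwrap()));
}
```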
This is an orthogonal thing for the most part. For example, sometimes I use builders in Go when the initialization logic is more complicated.
His stance is laid out more clearly later in the thread.
People should basically always feel like they can update their kernel and simply not have to worry about it.
I refuse to introduce “you can only update the kernel if you also update that other program” kind of limitations. If the kernel used to work for you, the rule is that it continues to work for you.
And I seriously will refuse to take code from people who do not understand and honor this very simple rule.
Also relevant is John Johansen’s response.
What a difference between his first post and this one. In the first one he comes off like a colossally toxic asshat. I know this is no surprise to anyone, but still. That kind of behavior is not OK. Period.
This post on the other hand is clear headed and explanatory. It lays out the rules and why it’s important to follow them.
Maybe Linus just needs a 1h send buffer? :)
“That behavior is not OK” is equivalent to “I am offended”, for this case.
For all types of behavior, you can always find someone who thinks it is not OK. Should it matter? It would be severely limiting for everyone in a place like the Internet.
It’s not “I am offended”; rather, probably 95% of people would be offended if they heard something like this headed their way. Linus has probably forgotten what it’s like to hear this level of toxic communication because nobody speaks to him like that. I know the “ideology” behind his behavior (he has talked about it several times), but honestly, saying such “sh**” to people is low, and most people are above that; that’s why he stands out.
Personally this power relationship is why I’m against BDFLs once a project reaches a certain size.
I agree in principle. In practice I have to wonder - what are the alternatives? Design by committee has some well known flaws :)
Toxic means that it is in some way damaging to a relationship between two individuals, groups, etc. In this case it is indeed toxic because it seeks to gain in some goal at the cost of the relationship with the submitters. Toxic isn’t strictly bad, sometimes a goal is so important that you need to break the relationship, however you should always choose the least toxic strategy that will ensure success. After all who knows when you’re going to need those people’s help in the future.
In summary, dark_grimoire seems to have a correct understanding of toxic, and mytrile does not which I assume is why they are being downvoted.
It would be severely limiting
It’s already limiting though – many people silently stop contributing when they receive messages like this or never consider contributing in the first place. This means the negative impact is hidden. Since it’s hidden, it becomes much easier to defend the status quo when an alternative might result in a better kernel.
By the same logic, the positive impact is also hidden. Because it is conceivable that without these messages, the kernel might have imploded upon itself, and the prevention of said implosion is doubtlessly positive.
If you are going to argue with hidden stuff then it goes both ways.
Do you really believe that it’s not possible to enforce rules and maintain high standards without calling people idiots, their contributions garbage, and so on?
I can certainly believe the parent comment, as it’s something I hear regularly, from people who decide not to get involved in projects/make further contributions/pursue opportunities at companies/etc because of things like this. FWIW, one of my friends can be found in the kernel CREDITS, and decided to walk away because of the LKML.
it is conceivable that without these messages, the kernel might have imploded upon itself
As a counterpoint, I’ve worked on a project that has a similar code size, customer reach, and zero-tolerance stance on security and stability bugs as the Linux kernel: Chromium. Chromium does not have anywhere near the level of abusive discourse on its mailing list as the LKML, and it has not imploded on itself as you have suggested. So the burden of proof is on the abusive language to show it is needed and not the other way around.
I disagree. I am not offended by his behavior, I find it to be unacceptable by virtue of the fact that I feel human beings should treat each other with a modicum of respect. Linus’s communications very often do not meet that standard. Hence from my book they do not represent an acceptable way to treat people, especially people volunteering to donate time to an open source project.
My recollection is that Quickcheck has code to generate minimalist test cases from the input that goes awry, which is a cool feature compared to simply throwing random data at a function.
QuickCheck generates from the types of the inputs to a function. Fuzzing is for anything that takes user input… so I guess it’s really any function taking that string of bytes.
Ok, but once QuickCheck finds a problem, it tries to generate an example that’s as small as possible, which is kind of cool:
https://stackoverflow.com/questions/16968549/what-is-a-shrink-with-regard-to-haskells-quickcheck
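The shrinking idea can be illustrated without QuickCheck at all; this toy Rust sketch (hypothetical names, with a stand-in property) repeatedly tries smaller candidates that still fail:

```rust
// Stand-in for the code under test: pretend it breaks for n >= 1000.
fn property_holds(n: u64) -> bool {
    n < 1000
}

// Given a known failing input, search for a smaller one that still fails,
// analogous to QuickCheck's shrink step.
fn shrink(mut failing: u64) -> u64 {
    loop {
        let candidates = [failing / 2, failing - 1];
        match candidates.iter().find(|&&c| c < failing && !property_holds(c)) {
            Some(&smaller) => failing = smaller,
            None => return failing, // nothing smaller fails: minimal example
        }
    }
}

fn main() {
    // Suppose random testing stumbled on a large counterexample...
    let minimal = shrink(982_451);
    println!("{}", minimal); // prints 1000, the smallest failing input
}
```

Real shrinkers generate type-aware candidates (shorter lists, simpler strings, and so on), but the search loop has this same shape.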
Haskell’s quickcheck does that. The Erlang variants use combinators that you can write and compose, and let you guide the distribution of inputs you want to have rather than just taking ‘types’ in there.
You can then, for example, decide that rather than sending any string, you’re going to take strings that contain 20% emoji, 5% ASCII, 10% sequences that include combining characters, and the rest is taken in linebreaks, escape sequences, and quotes.
This turns out to give you an approach that while definitely reminiscent of fuzzing, sits a bit closer to regular tests in terms of how you approach system design (you can even TDD with properties), whereas I’m more familiar with traditional fuzzers being used as a means of validation.
QuickCheck generates from the types of the inputs to a function
QuickCheck can generate input based on the types: it has a typeclass called Arbitrary, which provides an arbitrary function that we can think of as a “generic” random generator for those types which implement it (this typeclass is also where shrink is defined).
We can also write completely standalone generators when we want something more specific, like evenGen :: Gen Int which only generates even Ints, and we can use these in properties using the forAll function, e.g. forAll evenGen myProperty.
There are two other things to consider as well:
Properties can have preconditions, which are implemented using rejection sampling. For example, myProperty n = isEven n ==> foo will only evaluate foo if isEven n is True. If we generate an n which isn’t even, the test is skipped. If too many tests get skipped, QuickCheck tells us. We could achieve a similar thing with boolean logic, e.g. myProperty n = not (isEven n) || foo, but in this case we’re replacing skips with passes, which might give us false confidence in the results (e.g. we might get 100% of tests passing but never actually generate an input which satisfies the precondition)
We can use newtype to give a different name and Arbitrary instance to existing types. QuickCheck comes with a NonEmpty alias for lists, NonZero aliases for numbers, etc. The important difference between using a newtype and using a normal function (like evenGen) is that we can preserve an invariant when shrinking: e.g. shrinking an even number shouldn’t give us odd numbers.
So do many coverage-guided fuzzers, e.g. afl-tmin. Doesn’t really have much to do with the way the fault was discovered in the first place.
I couldn’t disagree more. I believe entities like grsec pushing kernel security forward in any way is valuable, whether or not Linus or you or anyone else agrees with what or how they’re doing it. Having these other players looking out for and challenging the status quo of system security seems valuable. Having them become closed source is weird and unfortunate, but I certainly don’t want them to go away.
I agree with you in principle, except they seem to be operating in bad faith here with their subscription terms. A positive outcome for this would be for them to respect the GPLv2 and stay in business with a reformed business plan.
These days there is the KSPP and I’m much more comfortable with the way they are operating. Obviously many of their patches are based on work by PaX/Grsecurity (at least for the time being), but unlike Brad Spengler et al. they are much more transparent about their work, don’t throw a public temper tantrum every two months, acknowledge that things other than security might matter too and actually cooperatively work with upstream instead of acting like everybody secretly hates them.
PaX/Grsecurity have brought great innovations and I think there is room for more radical security improvements that aren’t bound by the strict compatibility policy of mainline Linux, but Grsecurity is being way too hostile to make me trust the security of my machines to them. I’m also not convinced their technical process is sound and up to modern standards, but who knows what they are really doing?
I don’t think that “shell script in go” is doing what its author intended. Output only returns what is written to stdout, but in case of failure the interesting information is probably written to stderr. The docs for Output() state that it populates ExitError.Stderr, so that could be used.
(I didn’t actually verify this as it probably requires setting up kubernetes.)
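The same pitfall exists outside Go. As a sketch in Rust (using std::process::Command, whose output() captures both streams), the point is that the interesting information on failure usually sits in stderr; the `sh -c` command below is just an illustration that writes to stderr and exits nonzero:

```rust
use std::process::Command;

fn main() {
    let out = Command::new("sh")
        .arg("-c")
        .arg("echo oops >&2; exit 1") // stand-in for a failing kubectl call
        .output()
        .expect("failed to run command");

    if !out.status.success() {
        // Printing only stdout here would silently lose the error message.
        println!("command failed: {}", String::from_utf8_lossy(&out.stderr));
    }
}
```

In Go terms, the equivalent is to inspect `ExitError.Stderr` (or use `CombinedOutput`) rather than relying on what `Output()` returns.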
Neat. I didn’t think that Lobsters was at risk for account hijacking. I would’ve thought that a strong enough password would be enough. Is it your feeling @jcs that 2FA is becoming the norm for webapps, to be a normal expected thing? I’ve always thought it only necessary on higher-risk services.
Perhaps I should be rethinking both of those assumptions - is Lobsters popular enough to be targeted? Should I use 2FA everywhere, not just what I think is high risk?
You addressed your question to @jcs, but it’s certainly my feeling that it’s nice to have a norm of 2FA everywhere.
A lot of financial institutions still don’t offer it, or offer it only with significant limitations - for example, only via SMS, or without support for app-specific passwords for services such as Mint and Quicken. Of course it’s usually possible to choose ones which do, but it’s often necessary to compromise on it for other reasons…
The stragglers are going to hold out as long as they can until there’s resounding public outcry. If we can create a culture where of course every site has a robust 2FA implementation, they’ll face a lot more questions.
Besides, personally, my social media profiles are important to me. I could create new ones if they were compromised, but I’d lose my username and history, and my followers on sites where that’s relevant. And there’d be the stress of having to spend a couple days dealing with it - making sure nobody was using it to do a “grandmother scam” on people I know, and that sort of thing.
Nothing beats the utter batshit insanity of the 2FA (or is it 3FA?) system used by my bank.
You get a physical device that accepts an ATM card (but only an ATM card, it doesn’t work with Visa/Mastercard so I have to always have with me that stupid ATM card).
You enter a password, some epilepsy-inducing gif (yes, gif) appears on the screen. I am not joking when I say it’s epilepsy-inducing, it flickers like hell.
Next, you plug your ATM card into the device, and enter some PIN on the device. Then, with the card still in the device, you physically put the device on your computer screen. There are some diodes or something on the back of the device, that read the data contained in the gif optically in order to implement a challenge-response protocol.
It’s no mystery that this works very, very poorly, as the gif is rendered in different sizes on different computers, and while you can resize the gif, it’s very hard to get the size right. It’s very difficult to get this working on a smartphone.
After the device has read the data from your screen, it prints out a TAN which you then type in your browser. If the device has read the optical data wrong, it still generates a TAN, but one that won’t work. Once you enter a wrong TAN three times, you’re fucked, your account is locked and you have to come to the bank in person to unlock it.
Even the bank employees think this system is bullshit and they recommend the alternative which is SMS. Ignoring the dubious security of SMS-based 2FA, I simply can’t use it as I travel internationally a lot, and I never get the damn SMS.
I have started using optical ChipTAN a while ago and what you’re describing sounds quite similar. While the gif is a hack and I still feel stupid holding the reader to my screen, it is my favorite system for authenticating bank transfers so far.
I like that it is completely independent of the computer’s software stack and requires no additional software or configuration. I only had to configure the size once per screen and it has worked flawlessly ever since. This also has the advantage that there is no two-way communication and not a lot of trusted code, no USB stack etc. on the device. (Which doesn’t mean I’m confident there are no critical bugs in the reading of the flicker code.)
Also, ChipTAN devices should display and make you confirm at least the receiver IBAN and amount of the transaction before generating the TAN, which should help with corrupted readings. The idea is that you can compare the information to a source which is not controlled by the device generating the transaction, but obviously that’s not always possible, e.g. when the recipient is a contact I have stored in my bank’s web interface.
That is impressively terrible. UK banks seem to have mostly settled on ATM card readers that you type the challenge into & get a response to type back into the web browser. Doesn’t protect you against a full MITM attack of course, but it’s a lot better than some of the alternatives I’ve seen.
Ah, yeah, since the optical readout doesn’t work very well you have the option of typing the challenge manually. The problem is that the challenge is 60 hex chars long, and the device only has a 10-digit keyboard… At least Nokia 3310 had T9! (not that it would have helped with random strings…)
UK banks appear to believe that an 8 digit challenge & 8 digit response are sufficient to prove ownership of the physical 2nd factor authenticator. (The chip in the ATM card). 60 hex chars is massively overkill by comparison!
See also: MAYHEM, the winner of the recent DARPA Cyber Grand Challenge, and SAGE, which has been in large-scale use at Microsoft for years (more).
Sadly neither is open-source, which makes it hard to verify their effectiveness on real-world software. Comparatively dumb coverage-guided fuzzers (AFL, LibFuzzer, syzkaller, etc.) certainly have a much bigger impact on open-source software than any tool employing symbolic execution. This might simply be because no one has created a tool as streamlined and robust as AFL yet, but it might also reflect limitations of the technology itself: after all, most programs are orders of magnitude more complex than coreutils-style utilities.
I’m not so confident about evaluating ‘competing’ analysis technologies by the number of vulnerabilities produced. There is enormous flexibility in fuzzers: they are frequently run without any modification to the underlying source code, the creation of incidental machinery, or even any understanding of the underlying codebase. Contrariwise, in a prevailing atmosphere of intense focus on a small number of programs exhaustively analysed yet still suspected to be vulnerable (browsers, PDF readers), it’s hard to imagine a scenario in which SAT solvers and symbolic analysis don’t play an important role.
I do understand the hesitation but the auction of hyperbole is trading in both houses – security analysts panning the state of the art while ‘Academia’ (whatever that designation might imply) prophesies a fundamental shift to symbolic or even artificially intelligent analysis.
In spite of this, I’ve personally used these tools and their siblings in the service of checking real-world code, assisting me not only in finding vulnerabilities but also in generating several exploits (as well as plenty of ordinary bugs) in complex software like Plan9’s JPEG parser and the Erlang runtime system. Erlang has been reviewed by dozens of extremely intelligent people coding in a straightforward style; fuzzers have been run on virtually every major component, along with incidental artifacts like EPMD. Yet still, I found 4 remotely exploitable vulnerabilities (1 stack overflow, 3 heap overflows), all of which I can say with confidence could never have been found by a fuzzer.
It would have been impossible to comprehensively analyse even modestly complex software just as your comment suggests, but by relying on my own intuition about the weaknesses of software and the people who wrote it, along with the perseverance and extraordinary speed of my machine I could semi-exhaustively search for problems.
Yes, it was a huge hassle. The most flashy and visible symbolic execution tools turn out to either not work or be extremely special case, while the stuff that does (like KLEE) is like translating Greek. I had to learn enough about SAT solvers to get a Knuth Check (literally). I’m confident that none of this is in any way ‘essential to the art’ – All of this could be trivialized by a visionary team who decides to tackle this problem in the public domain.
Global warming is important, but realistically we can’t address it until we have regained political stability (and significantly improved on the pre-Trump status quo). Goals for the next 10 years are:
If I can make impacts on longer term issues during that time, great, but it’s hard to think about right now.
So, essentially you’re saying that since Trump was elected we are collectively incapable of doing anything but running in circles shouting about imminent fascism? Any efforts to improve technology wrt. environmental impact cannot realistically be expected to succeed, because politics? Seems like a terrible, self-defeating attitude to me.
Global warming is not a technological problem insofar as you can’t just invent a widget to solve global warming. Even if your widget is something like “planetary scale air filter”, you will not be able to build or operate it without social/political backing. Also:
It’s not a black and white issue, and it’s not going to be ‘solved’ by one major breakthrough. Their point is just that there’s no reason why the current political situation in the USA needs to bring everything to a halt. If you don’t have the time or headspace to deal with it right now, that’s absolutely okay (what matters is you’re aware of it)! Everyone’s circumstances are different, but collectively, we can’t afford to just put it on hold, and it doesn’t have to be at the expense of other important issues. If anything, I’d hope that it might have the power to bring people closer together (if a threat to humanity can’t do that, what can?).
Yes, you’re right that we can’t solve this problem with technical solutions alone. Other commenters notwithstanding…
What makes you think that? Climate change is in many ways a technical problem, how do you think we are going to solve it if not by adapting our technology?
Did mere technology or lobbying/sales decide what kinds of power plants will be all over many countries? Did technology itself create the disposable culture that adds to waste or did user demand? Is there a technological solution in sight for the methane emissions from cattle whose beef is in high demand? On other side, would we be storing endless amounts of data in these data centers appearing everywhere if technology didn’t make storage and computing so cheap? And is there a technological solution to avoiding them throwing that stuff away on a regular basis when customers want new stuff or manager want metrics to change? Is there a technological solution to getting people who neither care nor are legally required to care to stop doing damaging behaviors?
Sounds more like people-oriented decisions are causing most of the problem. Even if you create a beneficial technology, those people might create new practices or legislation that reduce or counter its benefits. Actually, that’s the default thing they do, and they’re doing it right now on a massive scale. I think we just got lucky with low-power chips/appliances, since longer-lasting batteries and cheaper utility bills are immediate benefits for most people that just happen to benefit the environment on the side.
It is obviously not merely technology that got us here. But these problems are all about technology on a fundamental level and if we want things to change, we need the tech that makes these changes viable. No point lobbying for an alternative that does not exist.
Always an interplay of technology- and people-oriented decisions. But changing technology is much easier compared to changing people, which has resulted in utter dystopia many times.
Same with well-intentioned legislation. But companies have no intrinsic incentive not to use beneficial technology, only to inflate its impact for marketing purposes (like the faked car emissions). They do have an incentive to game legislation, otherwise there would be no point to that legislation (in general; individual cases might profit from being good examples).