[Comment removed by author]
JavaScript, Haskell, and even Rust also have a bunch of these ‘features’ that need to be learnt. It’s just the nature of the beast, nothing specific to C.
Using Haskell for even moderately complex systems usually requires you to use (and learn) several language extensions that are GHC-specific and can add complexity to the language. It’s not uncommon to see a file with 6–10 language extensions.
This isn’t necessarily a bad thing. The core language has had a conservative evolution, and most of the extensions that you’ll actually use are safe and well-understood. It gives the programmer the ability to customize the language, which is neat. It’s not beginner-friendly, though. This isn’t a major problem for intermediate or advanced Haskell programmers, but it puts people off the language, especially if no one tells them that they can use :set -XLanguageExtension in ghci to bring the language extension in and explore its effects.
Rust, like C++, is going to seem impossibly baroque if you’re learning it because you have to (i.e. because you were put on a project that uses it) and don’t understand the reasons why certain decisions were made. It makes explicit a lot of the rules that are implicit in sound C++, and those just take time to learn. If you get into it because you heard that it was like Haskell, you’re going to be disappointed, because it’s designed to be much more low level.
Yep. C’s just from a different era, where there was much less of a gap between the designer and the user. Stuff like this was par for the course – software and computers in general were more arcane and it was just sort of an accepted fact of life.
I dug into C’s history in detail. The design specifics of C were done the way they were mostly because (a) the authors liked BCPL, which forced the programmer to micro-manage everything; and (b) they didn’t think their weak hardware could do anything better, and it occasionally dictated things. BCPL was actually made due to (b), too. It wasn’t about design or arcana so much as (a) and (b). The language then got too popular and fast-moving to redo everything, since people wanted to add stuff instead of rewriting and fixing stuff.
Revisionist history followed to make nicer justifications for continuing to use an approach that was really made for personal and economic reasons on ancient hardware. That simple.
I would argue, however, that if C had not been such a strong fit for a certain (rather low, compared to what most of us do) level of abstraction, it wouldn’t have been successful. If C had been less micromanage-y, then the lingua franca for low/mid-level systems programming would be some other language from the thousands that we’ve never heard of. Maybe it would be better than C, and maybe not; it’s hard to say.
Modula-2 and Edison were both safer languages done on the same class of machine. Easier to parse and easy enough to compile. Just two examples from people who cared about safety in language design.
http://modula-2.info/m2r10/pmwiki.php/Spec/DesignPrinciples
Modula-2 was designed for their custom Lilith machine:
https://en.wikipedia.org/wiki/Lilith_(computer)
These developments led to the Oberon language family and operating system:
https://en.wikipedia.org/wiki/Oberon_(programming_language)
Also note that LISP 1.5 and Smalltalk-80 were ultra-powerful languages operating on weak hardware. I’m not saying they had to go with them so much as the design space for safety vs efficiency vs power tradeoffs was huge with everyone finding something better than BCPL except the pair that preferred it. ;)
EDIT: Check your Lobster messages as I put something better in the inbox.
C was less designed than organically grown over the past 40 years. Even if this were removed, you’d still need to learn it to be able to read existing C.
Once you learn this, it’s not that big a deal.
I think that not having a better macro syntax built into the language is just a byproduct of the fact that C fills a niche between usability and control. Speaking generally, if one were to standardize too many of these ‘shortcuts’, C may become more usable but also might become more bloated and infringe upon access to low level control. I think people want access to some low level features without being forced to use assembly.
I’m not necessarily saying that this applies to do {...} while (0) (because IMO C should offer a better way to do this), but I think there’s a need to recognize a slippery slope of making higher level/black box things part of a language geared towards granular control.
| I think that not having a better macro syntax built into the language is just a byproduct of the fact that C fills a niche between usability and control.
The designers had a PDP-11 with a tiny memory and CPU, optimized for space and performance, preferred the tedious BCPL, and didn’t believe in high-level languages or features like code-as-data. That combination, plus maybe backward compatibility, led to the preprocessor hack. It was really that simple. What you’re posting is revisionist, although probably unintentionally so.
[Comment removed by author]
I noticed all the people doing the more secure stuff intentionally went with a PDP-11/45. The difference must have been significant. In any case, they could’ve still done basic type-checking and such on the other one. My main counterpoint was that they could’ve done Modula-2-style safety by default, with checks turned off where necessary on a per-module, per-function, or per-app basis. All sorts of competing language designers did this. It’s hard to tell what would’ve been obvious in the past, but it seems to me they could’ve seen it and just didn’t care. Personal preference.
[Comment removed by author]
Thanks for the details! I think you’re right about using the earlier model to boost credibility. My memory is hazy, so I can’t remember specifically, but I know I read something along those lines in one of the historical papers.
Not really; it was more bolted on from things that were floating around Bell Labs at the time. The original language designers had little to do with it.
To quote dmr:
| Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74], but also in recognition of the utility of the file-inclusion mechanisms available in BCPL and PL/I. Its original version was exceedingly simple, and provided only included files and simple string replacements: #include and #define of parameterless macros. Soon thereafter, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. The preprocessor was originally considered an optional adjunct to the language itself. Indeed, for some years, it was not even invoked unless the source program contained a special signal at its beginning. This attitude persisted, and explains both the incomplete integration of the syntax of the preprocessor with the rest of the language and the imprecision of its description in early reference manuals.
Levine’s classic. Probably the first online book I referenced in a “publication”; my high-school (A-Levels) project was an x86 disassembler. Actually, it was a database course-management project, but I changed it a month before graduation, and my teacher refused to grade it; she let me do the disassembler with the tacit understanding that I wasn’t going to get any help from her and would be at the mercy of the outside graders. (I also changed the implementation language from Pascal to C and x86 assembly.)
It was my first real program. And I bled. The x86 binary format is not for the faint of heart, at least not for a non-programming teenager.
And I didn’t do well ;-)
Trivia time: John Levine is (was?) the moderator of comp.compilers in the 90s, which I read religiously. He would edit posts with his own addenda (“[I think the Dragon book has this algorithm –John]”), etc. I didn’t realize it was an edit, so I took to asking questions on Usenet and other online fora, but if I ever had a doubt about my question, I would add a “-John” addendum at the end with my own alternate theories. To this day, my teenage alias is archived with that embarrassing signature in perpetuity.
Another bit of trivia. One night I refused to go hang out with my teenage friends because I wanted to read up on SML/NJ and work through some of Norman Ramsey’s “Hacker Challenges”. Ramsey was then at Harvard, IIRC, and had a page of challenges for “elite” “hackers”. Being a “blackhat” then, I totally misunderstood the label; I spent close to a month studying SML/NJ, because one of Ramsey’s challenges was an optimizing linker for SML/NJ. I pored over Levine’s book and all sorts of publications trying to live up to the challenge. I thought writing a linker for SML/NJ would label me “elite” and grant admission into exclusive IRC channels for top criminals ;-)
That month was the last time I had casual friends for the next decade. Every one of those boys moved on, and I never noticed us growing apart. The next time I looked up from this “research”, it was two years later and I was by then a Unix programmer (up, or down, from a Win 9x script kiddie). I missed prom, homecoming, graduation, New Year’s Eves .. the entire millennium, and I didn’t even care. I had better things to do.
I found a new, different kind of pride. I was no longer just another immigrant Somali kid “hustling” in America; I now had role models. I was better than bad-ass, I was curious. And I am grateful to this day that I did!
Hmm, is this wise? The urban legend has always been that ASan has big security holes.
http://seclists.org/oss-sec/2016/q1/363
The major vulnerability described there seems to be specific to suid binaries, which Firefox is not.