"It's strongly typed, with a bewildering variety of types to keep straight. More errors."
As someone who occasionally programs Haskell, I would not even dare to call C a strongly typed language. Moreover, the abstractions I learned from Haskell have substantially improved the readability and maintainability of the C code I write, although I have to admit that not all abstractions are useful.
"Code is scattered in a vast hierarchy of files. You can't find a definition unless you already know where it is."
I usually fix this with a run of Doxygen.
It's nice to have seen this, and I agree that this article should make us think about how dirty our code often is, but I am missing a real conclusion here.
C isn’t strongly-typed, but it is wrongly-typed for application programming. The types encode the wrong things unless you’re writing an OS and “an unsigned integral value of this many bits” is all of the semantics most of your data can ever have.
For applications, most semantics don’t have anything to do with size of the data in bits. Semantics are things like “height of person” or “width of page” or something you can track back to the physical world, or, maybe, something like “course prerequisite” which isn’t physical per se but still has an existence outside of the program you’re writing. Representations of those types, like “is this height in inches or centimeters”, are important, sure, but they’re also a distraction from the code’s logic, and something computers can handle better by way of very disciplined auto-conversion, auto-conversions which preserve all relevant semantics. Ideally, the programmer would only work in heights and weights, for example, and the computer (interpreter runtime, compiled code… ) would silently and efficiently shift representations between units of measure, numeric types and strings, and integers and floats. Scheme has had a numeric tower concept for decades now, which does some of this, but that’s only one piece of the puzzle here.
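C can't do the silent, automatic conversions described here, but even a hand-rolled sketch shows the idea of keeping units out of the application logic (the Height type and helper names are invented for illustration):

    #include <stdio.h>

    typedef struct { double mm; } Height;   /* stored internally in millimetres */

    static Height height_from_cm(double cm)     { return (Height){ .mm = cm * 10.0 }; }
    static Height height_from_inches(double in) { return (Height){ .mm = in * 25.4 }; }
    static double height_in_cm(Height h)        { return h.mm / 10.0; }

    int main(void)
    {
        Height person = height_from_inches(70.0);    /* the caller thinks in "height" */
        printf("%.1f cm\n", height_in_cm(person));   /* prints: 177.8 cm */
        return 0;
    }

The application code only ever deals in Height values; the unit conversions live in one place.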
Some domains require programmers to think hard about machine-level details. OS kernels are one example. Code which pushes the limits of floating-point precision is another. Code which absolutely must meet very tight performance desiderata is a third example. Those programs will be written in languages like Fortran and C and assembly for a long time to come. However, we got over writing everything in C long ago, and we have many fewer buffer overflows for it.
Right. That’s because it’s a… drumroll Systems-level programming language. Not an application-level one.
And yet a lot of applications got written in it.
I don’t think anyone is denying that. Plenty of people, for one reason or another (e.g. maybe it’s all they are familiar with?), end up using the wrong tool(s) for the job.
Exactly! It’s not about a specific programming language, but it is about picking the language that makes it easiest to accomplish your goals, without sacrificing a lot of performance.
If we are talking about embedded systems, or resource-, time-, or performance-intensive problems, then C/C++ is often the way to go.
If we are talking about regular desktop applications, then often Java should be the only way (as you’d get all the ARM and other systems for free), or C# if you are confined to MS-like platforms.
If we are talking about complex theoretical problems, use Haskell. For example: I can solve 30% of the first 100 Project Euler questions in just one single line of code, 50% in less than 5 lines of Haskell, and 70% in less than 10. 25 different assignments in a single .hs file is not uncommon.
“then C/C++ is often the way to go”
Or Ada or Rust if balancing correctness/safety/security with performance.
Actually no. In that case there is absolutely nothing to balance. It's performance all the way, without any compromises, and any overhead is unacceptable.
Ada is what you could use in safety-critical applications, although those are often written in Erlang or C as well.
And sorry, Go and Rust are nice, but I would not build any system on top of those languages yet. They simply have not been around for long enough and they could still disappear in no time because they still are niche languages that are barely used outside of Silicon Valley.
In fact, it's already hard enough to find Python developers as it is. Using Go or Rust would narrow the recruiting pool down to just about a hundred individuals just about everywhere except Silicon Valley.
You definitely can use Ada in those applications, since it's big in performance- and time-critical applications. It lets you turn the safety off as much as you need. Then SPARK can get it back for some things without runtime checks.
As far as Rust goes, that's true if you want the lowest common denominator. Jane Street's counter-argument for OCaml was that it lets you recruit higher-caliber programmers who might also be more motivated, since they get to use better tools. That could apply to Rust.
Whereas Go is a strange example, because it's designed to be quick to learn for non-programmers and even quicker for programmers. Anyone who has learned huge platforms like .NET or Java can pick up Go easily. Same for anyone doing C/C++.
I’m not convinced that using a specific programming language helps you attract higher quality programmers. That’s something you’ll have to check for during the hiring phase.
Oh, I'm all for collecting evidence of whether the claim is true or false in practice. I just offered it as the other possibility that comes with an unusual language.
FWIW, I think that’s a “high demand” problem, not a “low supply” problem, based on my experience.
Or cscope, ctags, etc! There are lots of tools for source base navigation.
I’m not entirely sure what to conclude from this article.
That Charles Moore is a unique person with unique abilities. We can learn from him, but we cannot be him.
We’re also in a slightly different era.
We’re so hellbent on seeing software engineering as a ‘team’ activity to the point that we’d rather cripple the expressive capabilities of languages so as to make it easier to scale teams up. This isn’t wrong, but it isn’t necessarily right, either.
This dilemma could be easily solved with a training budget ;).
That you should Be Moore Like Chuck.
This (although I agree with the sentiment that most software is bloated, even though I think many OO languages suffer from this problem more than C). I’m also not sure what “1% the code” means.
It’s a literal 1% of the amount of code.
I understand that, but the whole post is a bit vague. Shouldn’t it be “1% of the code”? Or is it used in an imperative mood? And what is the statement? “1% the code” is not a statement. I’m not sure what he’s claiming either. That he can implement C code in 1% of the line count in Forth? That obviously doesn’t hold for all C code.
So, I do understand the “code is bloated” sentiment, but fail to take away any concrete points from the article.
He’s saying that if you build an application in C, he can build the same application with 1% of the code, as measured in total source-code bytes between the NAND gates (i.e. including the compiler, linker, operating system, etc).
Thanks. I was wondering for two reasons:
It's obviously wrong. I'd like to see him do … in 0.73 bytes in Forth.
Now, of course this is a trivial program, but this still means that the claim is empty (a "no true Scotsman"-like claim). The same goes for anyone claiming that it's impossible to write a 100% secure program (or a bug-free one, for that matter). When I counter this with an example, people usually refine it to "it's impossible to write a nontrivial program that is perfectly secure". That is something I can't deny, because it's so subjective: if you define a program as trivial as long as it has no bugs, you're certainly right.
(BTW, I don't deny that it can be unfeasible to make a sufficiently complicated program bug-free. That is something completely different from "there can be no complicated bug-free program". I also consider bugs caused by the underlying infrastructure, whether software or hardware, to be not in the program. CompCert is a practical example of a program that is well underway to being bug-free. I'd argue that a compiler for a simpler language, with fewer back-ends, would be non-trivial and could be completely bug-free.)
You’re only counting a fraction of the C code, not the C compiler, the C standard library, or the operating system. You have to count them because you might have to debug them.
Also note his challenge is designed to be fair: "But I'm game. Give me a problem with 1,000,000 lines of C. But don't expect me to read the C, I couldn't. And don't think I'll have to write 10,000 lines of Forth. Just give me the specs of the problem, and documentation of the interface."
It’s very easy to build contrived examples in order to miss the point, but this is only fun when we’re young. There’s real value in trying to understand what Chuck discovered, and if you can puzzle it out to understand the way that Forth can be 100x more productive than C, you’ll probably write better C as well.
Oh sorry, I totally misread your last comment. I thought it said excluding the compiler, linker, OS, etc. Now it starts to make more sense, and I finally get his point. But hey, my point was (like codey’s and alexandria’s) that the post is vague: Your explanation is a lot clearer and shorter than his post.
.( Hello, world!)
While obviously not literally 1% of the code, I think this shows that the claim is not empty, but in fact supports his point. 4 bytes of non-data code vs 59.
As I explained above, I think the claim he’s making is vague. If the claim was that the size of an arbitrary C program can be reduced to 1% of it using Forth, then my example shows that this is obviously not true. Even if you cheat by counting only non-data code, it’s still off by a factor of 6. As geocar explained, this is not the claim: He counts code in the OS, compiler, linker, etc. as well, which makes his claim much more plausible, since these are typically huge swaths of code.
So my point is, if you have a point, state it clearly and don’t make any invalid claims just because they support your point.
You are taking the concept far too literally, please stop being obtuse.
I think Chuck Moore could write Java programs that would be 100 times shorter and less buggy than programs written by other people. It’s not the language.
"It has elaborate sytnax[sic]. Rules that are supposed to promote correctness, but merely create opportunity for error."
It would help if you could give an example. Are you talking about the MISRA-C rules? Or random rules? What are the rules concerning? This information is so general that it can't be countered or even grokked properly.
"It has considerable redundancy."
Again, what do you mean by 'redundancy'? Are you talking about function reuse? Reuse of if/for/while constructs? What?
"It's strongly typed, with a bewildering variety of types to keep straight. More errors."
I don't understand this. C has three or four groups of types at best: void*/ptrdiff_t, integer, float, char*. You can convert more-or-less freely within these groups. You should take care when converting from one group to another (for example, if converting from float to int, use lrint and friends and check fetestexcept, etc.). This depends on knowing what you want out of the type and knowing what the type needs from you. As a rule of thumb, use size_t for iteration and indexing; if you need to return a size or an error, use ssize_t. For working with characters, your Unicode library should give you a type for dealing with them and ways of converting safely between char* and whatever that type is.
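For example, a minimal sketch of that float-to-int conversion (the sample value and variable names are just for illustration):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* we inspect the FP environment; some compilers ignore this */

    int main(void)
    {
        double x = 3.7e19;                /* deliberately too large for a long */
        feclearexcept(FE_ALL_EXCEPT);     /* start from a clean exception state */
        long n = lrint(x);                /* rounds using the current rounding mode */
        if (fetestexcept(FE_INVALID)) {   /* raised when the value can't be represented */
            fprintf(stderr, "value does not fit in a long\n");
            return 1;
        }
        printf("%ld\n", n);
        return 0;
    }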
"As an infix language, it encourages nested parentheses. Sometimes to a ludicrous extent. They must be counted and balanced."
So does every other infix language, and so does Lisp, which isn't infix. I feel, though, that this comes down to knowing your language. Haskell, which has clever methods (like $) of forgoing parentheses, is much more difficult to follow for someone who isn't really very familiar with it. But I'm not going to complain about Haskell having that, because it's a feature of the language that (if I were writing Haskell code) I must learn to work with effectively. Likewise, if you use C, you need to know, even if only very roughly, operator precedence.
"It's never clear how efficiently source will be translated into machine language. Constructs are often chosen because the programmer knows they're efficient. Subroutine calls are expensive."
Modern Intel architectures make this pretty easy: avoid branches, keep the cache in mind. See http://nothings.org/computer/lexing.html for an example of complex computing without branches.
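To sketch the "avoid branches" idea with a toy example (the input string and table are made up): classify bytes with a lookup table so the inner loop has no data-dependent branch.

    #include <stdio.h>
    #include <string.h>

    static unsigned char digit_table[256];   /* 1 for '0'..'9', 0 otherwise */

    int main(void)
    {
        for (int c = '0'; c <= '9'; c++)     /* build the table once; branches here are fine */
            digit_table[c] = 1;

        const char *s = "a1b22c333";
        size_t len = strlen(s);
        size_t digits = 0;
        for (size_t i = 0; i < len; i++)
            digits += digit_table[(unsigned char)s[i]];   /* add 0 or 1; no branch per byte */

        printf("%zu digits\n", digits);      /* prints: 6 digits */
        return 0;
    }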
"Because of the elaborate compiler, object libraries must be maintained, distributed and linked. The only documentation usually addresses this (apparently difficult) procedure."
Ehh? For a start, most other languages that are contemporary with C do this. Most other languages that maintain compatibility with C do this. Personally, it feels more complex to have to bundle an entire runtime system with your library's object files (see: Ada) than just distributing the libraries. But OK.
"Code is scattered in a vast heirarchy[sic] of files. You can't find a definition unless you already know where it is."
Both cscope and grep exist. Use them.
"Code is indented to indicate nesting. As code is edited and processed, this cue is often lost or incorrect."
Do Rust/Ada/Lisp/Pascal/Python not all do this? Wait. Are you comparing C and FORTH? I think a lot of this now makes sense.
"There's no documentation. Except for the ubiquitous comments. These interrupt the code, further reducing density, but rarely conveying useful insight."
You can use C with Doxygen or whatever. The library's README or related documentation should cover using it. C lacks a good documentation system, but really, so do a lot of its contemporaries. And at the end of the day, it's not really the documentation system that exists but how the programmer uses it. You can write abysmal documentation in a language with an amazing documentation system.
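For what it's worth, a rough sketch of what a Doxygen-style comment looks like in C (the function is invented for illustration):

    #include <stddef.h>

    /**
     * @brief Sum the bytes of a buffer.
     *
     * @param buf Pointer to the data to sum.
     * @param len Number of bytes in buf.
     * @return    The arithmetic sum of the bytes.
     */
    unsigned long byte_sum(const unsigned char *buf, size_t len)
    {
        unsigned long sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

Running doxygen over the sources turns comments like this into browsable, cross-referenced HTML.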
"Constants, particularly fields within a word, are named. Even if used, the name rarely provides enough information about the function. And requires continual cross-reference to the definition."
What's the alternative here? Of course you need to know what a constant stands for to understand how it's used. If I throw you the definition of F=MA, unless you've taken enough high-school physics to know that 'F stands for Force, etc.', 'M stands for Mass, which means […]', 'A stands for Acceleration, which is […]', then you're going to be scuppered by the definition. This is a knowledge-transfer problem, a fundamental problem of grokking things, not a defect of any single programming language or dialect.
"Preoccupation with contingencies. In a sense it's admirable to consider all possibilities. But the ones that never occur are never even tested. For example, the only need for software reset is to recover from software problems."
Are they not? Is this not true for every language? Humans write tests; humans are fallible and might ignore your amazing tool that tells them how much code they've written tests for. If you make a tool that forces them to get 100% code coverage, they'll just write code that handles fewer eventualities, so there is less code to test, which leads to shoddier code! It's the same quandary with documentation. That's not even going into the fact that it's a fallacy to think you can test all of your code anyway (although I agree you should aim for 100% coverage, ideally).
"Names are short with a full semantic load."
It's kind of funny that you talk about having to jump everywhere for definitions in C. Forth makes that worse, because instead of being able to abstract things away, you have to essentially understand the entire codebase. Everything deals with the stack, and each Forth word does not signal how much of the stack it deals with. Thus, to understand one definition you have to understand how all definitions beneath it use the stack, and this goes on until you are at the primitives that Forth has given you. Forth seems actively hostile to abstraction. There are two facts of life: A) Any non-trivial program will have a large number of words. B) Any given programmer can come up with a definition of a word that does not match the one in your mental model.
C has syntax to deal with that. It has comments, interfaces, types, and named parameters. I agree that maybe there are much better tools for the job, but here is where Forth does worse than C (no types to tell you that "carry" is an integer and not a double; no named parameters, so there is no "carry", and you don't necessarily have the same definition of "add" that the original programmer did), and it attracts many of the same complaints that you listed earlier in the article!
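To make that concrete with a hypothetical example (the function and its signature are invented for this comment): in C, the declaration alone tells you that "carry" is an integer flag and where it goes.

    #include <stdint.h>
    #include <stdio.h>

    /* The signature documents what "add" consumes and produces, and that
       "carry" is an integer flag rather than a double. */
    static uint32_t add_with_carry(uint32_t a, uint32_t b, uint32_t *carry_out)
    {
        uint32_t sum = a + b;        /* wraps modulo 2^32 */
        *carry_out = (sum < a);      /* the sum wrapped iff a carry occurred */
        return sum;
    }

    int main(void)
    {
        uint32_t carry;
        uint32_t s = add_with_carry(0xFFFFFFFFu, 2u, &carry);
        printf("%u %u\n", (unsigned)s, (unsigned)carry);   /* prints: 1 1 */
        return 0;
    }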
"Another difficulty is the mindset that code must be portable across platforms and compatible with earlier versions of hardware/software. This is nice, but the cost is incredible. Microsoft has based a whole industry on such compatibility."
Write to POSIX, and it's supported everywhere. I'm not sure what you want the alternative to be here. Do you want software to not be compatible with different operating systems? Or different processor architectures? What?
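As a small illustration of what "write to POSIX" looks like in practice (the timing use case is an arbitrary example): request a POSIX version with a feature-test macro and then stick to the interfaces it defines.

    #define _POSIX_C_SOURCE 200809L   /* ask for POSIX.1-2008 interfaces */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;
        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {   /* POSIX, not part of plain ISO C */
            perror("clock_gettime");
            return 1;
        }
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }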
Sorry if you were misled otherwise, but this wasn't written by me - it was written by Charles "Chuck" H. Moore of Forth fame. A while ago, too.
I think a lot of that article makes more sense if you consider embedded programming, where they don’t have POSIX and such (or even documented opcodes), which I hear FORTH is popular with
Console/handheld game development is an interesting case for non-portable code, too. You can write a game for a single platform, and get a better result by not trying to make it compatible with others. Or maybe target two or three, and ignore the infinite other possible machines you could make it compatible with
Those are also cases where maintenance is less of an issue. I haven't written more than a trivial amount of FORTH, but I would not want to maintain it over a long period, because it looks like hell to refactor/rearchitect it after it's written.
It looks like its strong suit is programs that you can write once, and throw out and write again when the hardware or the problem changes
I think the author is arguing that all/most programs are that kind. I don’t agree with that, but it’ll be good to have their voice in my head when I go to write some code that’s more general than it needs to be
I've often felt (without much familiarity with the language, admittedly) that Forth's ergonomics would be greatly improved by having words declare their stack effects as "function signatures" and have them checked. At the least, I'd like to see a Forth-like language that explored that idea, possibly with types as well. Though maybe that goes fundamentally against the code-as-data model?
The Factor variant of Forth does some static checking.
One of the tenets of learning Factor is to take the mantra “Factor is not Forth” to heart.
They missed a trick by not calling it FINF instead.
No offense meant :) I admire factor from a distance.
"Checking" them would be complicated (and pointless). They are, however, declared with stack comments in good code, e.g.
: SQUARE ( n -- n) DUP * ;
Why would it be pointless? If I could declare a stack effect like the ( n -- n ) above as a real signature, that could be automatically checked for consistency and type safety without reducing the expressiveness or ease of use of the language.
You have to add all the code to know when you’re doing a type check, you need to have code to handle all the different cases that are being analysed (including varargs), you need to be able to keep track of words that are already valid and called from a parent word lest you compute the safety every time (cache) - not sure how rdrop would work in this kind of type checked system either, since an rdrop isn’t a return at all - plus words that perform i/o being special cases…
Worst of all, you have to add types. Just sounds like a lot of complexity for what could be gained by writing short, simple to follow definitions.
I feel like I’m forever chasing the mystique of 1% of the code for 80% of the functionality. I’ll settle for 5, 10, or 20%.
Most conventional languages let you down quite a bit, though, because they tend to be eager, have fixed syntax (this relates to eagerness) and impose a [small] runtime penalty for encapsulation. Also, the cultures in those languages tend to worship bigger libraries rather than actively questioning why they get so big. It seems to evoke some misguided notion of safety. I’m not sure what the answer is here, but I do think it is a valuable thing to continue investigating.
I am not familiar with this language. Is "one percent" being used as a verb here?
In the linked article, Charles Moore claims (somewhat credibly, given his semiconductor CAD example) to be able to write important pieces of software (like a semiconductor CAD package) in 1% as many lines (or bytes) as popular commercial programs written in C. For an existing C program written in 50,000 lines, he claims he can write something very similar in 500 lines of FORTH, without resorting to cryptic code-golf.
I think it is!
I think C dominated because it went hand-in-glove with the microprocessors of the 80s. It had enough structure to make a program somewhat portable across the 8/16/32 bit micros of the day, but not so much structure that it could lead to storage/performance penalties on lesser machines. What made it ugly–undefined behavior, macros everywhere, an avalanche of compiler flags–is in some degree responsible for its success.
For embedded projects–especially small ones–Chuck's trade-off of throwing that structure away and surrendering to the target architecture makes a lot of sense. It's starting over, but with a clearer picture of what your tool is doing, and a sharper focus on what you're trying to do with it. Not being a C-like language with function signatures, there is more pressure to actually document what a word does rather than resting on the fiction of "self-documenting code". There is also the underlying assumption that the project will be a "snowflake design"–rather than the foundation of computing for the next 40 years–so there is far less pressure to over-engineer, which is a cancer.
And even for larger projects, there is still some validity to what he is saying. We try to port these C libraries to new architectures for so long that everything eventually degrades into macro-soup, and then it gets re-written in-place (LibreSSL, neovim, etc) anyway. The kernels take a more incremental approach to it, but same thing.
In terms of corporate market-driven bean-counting efficiency, the Chuck Moore approach is obviously a doomed crusade. Craftsmanship on one machine leaves one vulnerable to shifts in the marketplace. WordPerfect vs Word is probably the best example of this. We’ll just keep whittling down our C codebases, and our self-respect.
This
Sadly, that is not an undesirable result. Bloated code does not just keep programmers employed, but managers and whole companies, internationally.
Compact code would be an economic disaster, because of its savings in team size, development time, storage requirements and maintenance cost.
Isn't that sort of like the "broken window fallacy"? Yes, in today's world, if all the extra bloat suddenly disappeared, it could cause huge problems. But if that bloat never existed in the first place, all those resources would have been free to use for more, better things.