Personally, the discussions on goto are getting rather old. People seem to latch on to tidbits of “wisdom” without understanding them, and without that understanding the “wisdom” becomes a burden and causes countless pointless discussions like this one. It’s just like the dynamic-vs-static arguments that have morphed beyond their original meanings and are now hotly debated “wisdom” that doesn’t make any sense. (Referencing this and similar articles: Bellman Confirms A Suspicion - Where does “dynamic programming” come from?.)
Perhaps I’m a bit younger (I started working professionally in 2008); I’ve never heard anyone ever question the “goto considered harmful” paper. I found Linus' comments in this article fascinating. For the first time in my life I’m now considering the possibility that goto could improve clarity (I still probably won’t use it, but it’s opened up my mind).
I think where and what you’re working on is pertinent: if you’re writing a fast finite state machine, for example, goto is indispensable. However, I’ll readily confess that practical uses for goto have become fewer and fewer.
I meant to reply to this a while ago, sorry…
That’s part of my complaint. It’s been forever since that paper was written, and people STILL hold it up as indisputable fact. It’s an opinion, and a damaging one at that. It would be far better if goto were warned against rather than condemned outright. As this and many other articles/discussions point out, you can’t blame the language (except perhaps Brainfuck) for a programmer making a mess. Perl is a perfect example. I’ve seen so many people complain about how unreadable it is, but Perl is a perfectly readable language if it is written clearly. You can produce write-only code in any language, just as you can make spaghetti code without goto.
I had a colleague who kept insisting that goto is bad, and then produced something along these lines:
do {
    tmp = process(input);
    if (tmp == -1)
        break;
    result = further_process(tmp, 42);
    if (result == -1)
        break;
} while (0);
Don’t touch my gotos. :-)
It sounds like your colleague was against labels, not gotos.
This is the reason I’m excited for new innovations in CPU architecture (e.g. the Mill CPU). So many of our three-point-some billion instructions per second are being wasted waiting on memory. It’s just absurd.
I suspect that’s really mostly because CPUs are marketed by performance, while RAM is marketed by capacity. There’s no particularly inherent reason memory needs to be so much slower than CPU, but when all the market pressures are for it to be bigger and cheaper rather than faster, it’s obviously not going to catch up.
There’s a lot more in play than just “market pressures”. Take, for example, the L1 cache in a recent CPU. This is an on-chip memory (SRAM) much smaller than main memory (say 32K), and with a vastly higher cost-per-unit-size. And it’s still “slower” than the CPU, in that it’ll typically take multiple cycles to access. The biggest constraints aren’t economic, they’re physical – electrical signals propagate through wires at finite speeds, and as your memory gets bigger (even at just a few KB of SRAM) this starts to be a non-negligible factor. So if you wanted a memory that was “as fast as your CPU”, you could build it, but it’d be so tiny as to be completely unusable. Imagine your register file being all the memory you had.
Notably, the PlayStation 4 has a unified memory system with 8 GB of GDDR5, clocking in at 176 GB/s! Maybe it’ll persuade other manufacturers to adopt similar architectures?
But the problem is that for many (most?) workloads, the relevant aspect of memory performance that’s problematic isn’t bandwidth, it’s latency – and the two are often at odds with each other.
This is a tough problem.
How did C programmers solve this problem? (Including mixing in modules from compile-to-C languages.)
Typically, C and C-superset (e.g. C++ and Objective-C) applications either:
Use a few well-understood dependencies (e.g. GTK+, ncurses, zlib, et al.). The integration effort ranges from a simple #include <dependency> plus a linker flag, all the way up to CMake magic.
Use monolithic frameworks (e.g. Boost, Cocoa, Unreal Engine, et al.). They mask most of the complexity from the programmer.