No, no, it’s outside the environment. There’s nothing out there, but servers…and computers…and BSD users.
Coming from another angle, the animosity seems to be between people who see C as a high level assembler and people who recognize the abstract machine and its semantics. This isn’t just a compiler authors vs C programmers thing.
The cool thing about UB is that it allows an implementation to be a high level assembler or not. It allows very simple, naive implementations as well as complicated optimizing ones. You as a user should get to choose. The choice isn’t made for you by the standards committee.
As soon as you ask the committee to take out UB and dictate specific behaviors, you’re trying to use their hand to force C into becoming one thing above the other.
See limits.h. Anyway, I’m amazed at how controversial it is to object to terrible software design. Surprises are bad. Demanding that users scrutinize the details of a complex, illogical standard and then guess which common idioms may fail is symptomatic of a dysfunctional dev team.
As soon as you ask the committee to take out UB and dictate specific behaviors, you’re trying to use their hand to force C into becoming one thing above the other.
Couldn’t you get rid of UB but make it all “implementation defined”?
Think about it.
That would not help as much as you might think. It’s either a terrible documentation burden for the implementation (which entails more than just the compiler!), or they’re going to write something like “anything might happen”, which is as good as UB. Whiners who whined about UB would now just whine about terrible vague implementation defined behavior they can’t rely on.
For a concrete example, try to define and document the behavior for array out of bounds access. What might happen? A segfault, perhaps. Or you overwrite some variable or pointer that changes your program behavior unpredictably. Or your code is running on some system with memory mapped io and that io write sets off the fire alarm and sprinklers. Or it launches a nuke.
Just about anything could happen, and it is impossible for the compiler writer to write a non-vague description unless their implementation always does some kind of array bounds checking to ensure something predictable always happens. (Good luck with arbitrary pointer arithmetic.)
How would you like an implementation that documents that null pointer checks following a dereference of said pointer may be optimized out? We get the same problem, and the same whining about it. Changing the standard like this doesn’t fix it.
Now if you want that array bounds checking and other implementation specific stuff, you can have that already without rewriting the standard. UB doesn’t mean “the implementation may not do something sensible and predictable and documented.”
Again, I consider this a feature. An implementation may be really simple, and you don’t have to use it, you can go find (or implement) an extreme implementation that has all the checks and guarantees (with documentation) that you want.
You don’t belong to the Internet, we don’t like sensible people or their sensible arguments around here.
How UB is treated is entirely up to the implementation. The standard doesn’t impose any requirements, but an implementation is free to provide any documented guarantees it wants.
So you have a beef with some implementation(s). And to force their hand, you would prefer to change the standard. That’s a disappointingly aggressive way to get where you want to be. Instead of whining about the standard, you could exercise the freedom it gives you to find or make an implementation that gives you all the guarantees you want. (You could start by using -fwrapv and by not abusing -O3; the manual of your friendly compiler probably has a lot more in store for you). Meanwhile the rest of us may continue to disagree with your opinion of the interpretation.
You are attempting to excuse poor engineering design. Try: “My default query optimizer for SQL will format the disk if the query tries to join incorrectly, since the SQL standard says that query has undefined results.” Or: “My default memory map system for the OS will replace your program with echo rm -rf * if you have a memory fault, because POSIX does not mandate any particular behavior on memory faults.” To me, and this is just my opinion which you are of course free to reject, the purpose of software is to run applications. If your software breaks applications as an “optimization” that does not have a super compelling justification, then you should be fired.
I’ve posted up another video with Charles Forsyth & Bryan Cantrill on the same panel. I really like the talk, as it shows the completely different approaches to technology and mentality. One group has been polishing an idea in one way or another, and the other is back from the future of the past.
Would’ve been interesting to evaluate Go, which claims descent from both Algol and C. Here the break statement is implicit: each case block runs on its own, without falling through. The special keyword “fallthrough” is added to merge logical case blocks together, restoring the default behaviour of C. Case values can be arbitrary expressions, too. As with most things Go, not revolutionary.
more here: https://golang.org/ref/spec#Switch_statements
switch tag {
default: s3()
case 0, 1, 2, 3: s1()
case 4, 5, 6, 7: s2()
}
switch x := f(); { // missing switch expression means "true"
case x < 0: return -x
default: return x
}
switch {
case x < y: f1()
case x < z: f2()
case x == 4: f3()
}
The Plan 9 C compilers are fast. Really fast. I remember compiling kernels served from remote filesystems in ~6 seconds… Does wonders for quick turnaround time…
Yes. The whole Plan 9 toolchain is a joy to use, and it’s amazing to see how the 9front people have kept it up to date and usable, with working SSH clients, wifi drivers, USB 3, hardware virtualization that can run Linux and OpenBSD, and the cleanest NVMe driver I’ve ever seen.
I actually use the system regularly for hacking on things; while it’s definitely not the most practical choice, I really enjoy it.
Wait up, wifi drivers? I need to set that up on one of my several gazillion laptops posthaste
9front uses OpenBSD’s wireless firmware, so if your card works on OpenBSD, it’ll probably work on 9front.
It has far fewer drivers, though. You’ll probably have good luck with an older Intel card, but you should check the manual. As with all niche OSes, don’t expect it to work on random hardware out of the box. And as with many niche OSes, older thinkpads are usually a good bet for working machines.
Yes, good point. Now that I think about it, even then the support for wireless is quite bare. When I ran 9front on a ThinkPad a couple of years ago, I recall the Centrino Wireless-N line of cards working well. For anyone interested, here are the docs
What made it (or makes it) so fast? Did it have to leave out some other feature to achieve that? Or was it just the Plan 9 source? I remember hearing that it had no ifdefs or other preprocessor tricks, which let it compile so quickly.
The Plan 9 source was definitely part of it (include files did not include other files), but the compiler itself also eschewed heavy optimization in favour of fast code generation. The linker wasn’t that fast, though.
Here’s a quote from the original papers that came out in the early nineties:
All in all the kernel was a few megabytes in size, compiled from several hundred thousand lines of code: considerably less than the core of the Linux kernel at the time, and that’s not counting the myriad drivers Linux had that Plan 9 didn’t. More here: https://9p.io/sys/doc/compiler.html