So the task of checking whether a signed operation overflows gets delegated to the compiler, while the programmer cannot write that check by hand because of the risk that the compiler will delete it. That looks like a cheap and ugly solution to me. A better solution would be to force the compiler to document its representation of signed integral types, so that the programmer knows the consequences of writing code that overflows, instead of it triggering undefined behavior.
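To make the risk concrete, here is a minimal C sketch (assuming a typical compiler that treats signed overflow as undefined) of the kind of hand-written check that can legally be removed:

    #include <limits.h>
    #include <stdio.h>

    /* Hand-written post-hoc check. Since signed overflow is undefined,
     * the compiler may assume a + 1 > a always holds for a signed int
     * and fold the comparison to a constant, deleting the check. */
    int increment_is_safe(int a) {
        return a + 1 > a;   /* often becomes `return 1;` at -O2 */
    }

    int main(void) {
        printf("%d\n", increment_is_safe(INT_MAX)); /* may print 1, not 0 */
        return 0;
    }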
Really? Because this seems like the right thing to me. Instead of forcing everyone to write overflow-checking code with no help from the language, this provides a way to do it that, once optimized, should be as efficient as checking the flag the processor sets. The compiler understands the architecture and can implement overflow checking “correctly”. Anything else is a hack.
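As an existence proof, GCC and Clang already expose checked arithmetic as builtins (a compiler extension, not ISO C); a sketch:

    #include <stdio.h>

    /* __builtin_add_overflow performs the addition, stores the wrapped
     * result through the pointer, and returns whether it overflowed.
     * On x86 it typically compiles to an add plus a branch on the
     * overflow flag. */
    int checked_add(int a, int b, int *out) {
        return __builtin_add_overflow(a, b, out);
    }

    int main(void) {
        int r;
        if (checked_add(2000000000, 2000000000, &r))
            puts("overflow detected");
        else
            printf("sum = %d\n", r);
        return 0;
    }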
I cannot quite follow your train of thought. Why would the compiler understand the programmer's intention, or the architecture, better than the programmer does? Those are the first priorities of people who write systems. You can outsource those concerns to a compiler if you write higher-level software, no problem with that, but when you are handed a piece of hardware and you want to write code for it, you need to be in absolute control.
Optimizations are an implementation detail in this discussion. The compiler could just as easily optimize my manually written overflow check into a test of a processor flag.
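For reference, the UB-free way to write that check by hand in C is to test against the type's limits before the addition, so the overflow never actually happens; a compiler is free to lower this to the same add-and-test-flag sequence:

    #include <limits.h>
    #include <stdbool.h>

    /* Pre-condition check: no signed overflow is ever executed,
     * so the compiler cannot delete it under UB assumptions. */
    bool add_would_overflow(int a, int b) {
        if (b > 0 && a > INT_MAX - b) return true;  /* would wrap high */
        if (b < 0 && a < INT_MIN - b) return true;  /* would wrap low  */
        return false;
    }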
If the programmer intended the wrapping, then that intent should be expressed in the code, by using something like explicit wrapping arithmetic, as opposed to implicitly relying on wrapping behavior with nothing in the code conveying that to the future readers who have to maintain the system. The key is that the programmer and the compiler should be working together; they are not in conflict. Programmers, even for specific embedded hardware, choose aggressively optimizing compilers because they want those optimizations. And there's a very easy way to get absolute control if you want it: write the assembly yourself.

But most people who work on systems don't think about things like overflow behavior. Even before compilers were optimizing the hell out of everything, and even in -fwrapv codebases, overflow vulnerabilities are distressingly common. Most of the time programmers just want straightforward integer arithmetic and don't much care about the size of the type.

I'm not defending C here: C has issues with wrapping being tied to signedness, and it lacks standard ways to express that, yes, the wrapping is intended. But languages like Zig and Rust take advantage of overflow not being defined to wrap in order to put in safety checks that prevent real issues in production code. Zig lets you pick between undefined overflow and panicking overflow depending on your preference, and you can override that per scope, so I think it's typically better to leave panicking on globally. Rust quietly wraps in release builds (debug builds panic on overflow), which is a bit unfortunate but understandable: integer arithmetic is safe, Rust doesn't want to pay the cost of checking everywhere, but it also can't allow UB from safe code, so wrapping is the correct choice in Rust's context.
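Rust spells this intent out with methods like i32::wrapping_add and checked_add; even in plain C you can make the opt-in explicit by round-tripping through unsigned, where wrapping is defined. A sketch (the name wrapping_add_i32 is mine):

    #include <stdint.h>

    /* Unsigned arithmetic is defined to wrap, so do the add in uint32_t
     * and convert back. Strictly, the conversion back is
     * implementation-defined in ISO C, but mainstream compilers all do
     * two's complement wrapping. The function name documents the intent. */
    int32_t wrapping_add_i32(int32_t a, int32_t b) {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }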