I just finished taking an operating systems class a few months ago, and my professor made this very clear. I’m surprised it’s something not everyone would be aware of. Generally, I like to assume all functions can fail, and then double check before using them to be sure.
C makes doing thorough error handling fairly tedious, and for most compilers you have to actively enable warnings about unchecked results from non-void functions. People usually know better, but people make mistakes. (Mistakes that are easily caught by better error-handling strategies.)
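For example, GCC and Clang both support an attribute that makes ignoring a return value a warning (on GCC it's enabled as part of -Wall via -Wunused-result). A minimal sketch, with a made-up function standing in for anything that can fail:

```c
#include <stdio.h>

/* read_config() is a hypothetical stand-in for a fallible operation.
 * The attribute tells GCC/Clang to warn if a caller discards the result. */
__attribute__((warn_unused_result))
static int read_config(int *out)
{
    *out = 42;   /* pretend this could fail and return nonzero */
    return 0;    /* 0 = success, nonzero = error */
}

static int use_config(void)
{
    int value;
    /* read_config(&value);  <- this bare call would warn: ignoring return value */
    if (read_config(&value) != 0) {
        fprintf(stderr, "read_config failed\n");
        return -1;
    }
    return value;
}
```

Of course the attribute only helps for functions that are annotated; for the rest you're back to discipline and code review.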
My favorite C error handling story:
A couple years ago, I was working on an automotive embedded project that read some factory-calibrated settings out of an external EEPROM chip. This included a 32-bit unsigned int for an overheating temperature threshold, so the device could shut itself down to avoid a potentially dangerous situation.
The previous developer had failed to check if the code that read from EEPROM actually succeeded. If the read failed, that region of memory would be initialized to all 0xFF bytes. That’s pretty hot.
I also would have included a hard-coded maximum in the code itself. Sometimes values are out of range but you can’t afford to exit().
That’s what I did: the calibration values were all clamped to sane ranges, in case the saved values were uninitialized or corrupt.
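A minimal sketch of that kind of clamping, with invented limits (the real values would come from the hardware spec) and a fallback default for anything unprogrammed or corrupt:

```c
#include <stdint.h>

/* Hypothetical limits for illustration only. An unprogrammed EEPROM
 * region reads as all 0xFF, so the raw value 0xFFFFFFFF must never be
 * accepted as the shutdown threshold. */
#define TEMP_LIMIT_MIN_C      40u
#define TEMP_LIMIT_MAX_C     125u
#define TEMP_LIMIT_DEFAULT_C 105u

static uint32_t sanitize_temp_limit(uint32_t raw)
{
    if (raw < TEMP_LIMIT_MIN_C || raw > TEMP_LIMIT_MAX_C)
        return TEMP_LIMIT_DEFAULT_C;  /* fall back to a known-safe default */
    return raw;
}
```

This way even a silently failed read degrades to a conservative threshold instead of disabling the shutdown entirely.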
This is a wonderful example of a type-safety fail.
PIDs are represented as ints, and while -1 should be interpreted as “fail”, it is not a distinct type, so it’s possible to interpret it as the wrong thing.
A better solution would be to return “option int” that would give None for failure and Some pid when it succeeds. In fact it’s easy to do this in C with a union structure.
I don’t see how a union would give you any more type safety. C only has untagged unions and doesn’t put any restrictions on reading the “wrong” element of a union at any point in time. You have to implement the tagging yourself, at which point you’re back where this started with manual error checking.
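To make that concrete, here's a sketch of what the hand-rolled tagging looks like. Nothing stops a caller from reading .pid without checking .valid first; the tag is a convention you maintain yourself, not something the compiler enforces:

```c
#include <stdbool.h>
#include <sys/types.h>  /* pid_t */

/* A hand-rolled "option pid" type. The names are made up for this sketch. */
typedef struct {
    bool  valid;  /* the tag: you must set and check it manually */
    pid_t pid;    /* meaningful only when valid is true */
} opt_pid;

static opt_pid some_pid(pid_t p) { return (opt_pid){ .valid = true,  .pid = p  }; }
static opt_pid none_pid(void)    { return (opt_pid){ .valid = false, .pid = -1 }; }
```

So it documents intent and makes the check harder to forget, but the "manual error checking" is still there, just moved behind a struct.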