Object-oriented programming is, I think, one of the most misapplied and poorly-understood concepts, especially by its biggest proponents.
An “object” in the mathematical sense is something that (a) will not be further interpreted and (b) for which incomplete knowledge is allowable. For example, the base substrate of mathematics is set theory, from which one can arduously build up the rest of mathematics. An alternative logic might include primitives like “True” and “False” that are not further interpreted or reified into sets. Existential and universal quantifiers implicitly assume a domain of discourse.
Files are the original object. Exactly what is in the C FILE type doesn’t matter. You can have incomplete knowledge of it – most people have never dealt with tracks and sectors on a hard disk, and that’s OK. It’s a thing that you can read from and possibly write to. It might live on disk, or in memory, or be an interface to a device. Object interfaces are also useful when you want local interpretation, e.g. letting DenseMatrix and SparseMatrix define separate implementations of the same mathematical concept (say, the determinant). Furthermore, object boundaries can be useful when developing a system locally that might later be factored into multiple services.
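A minimal sketch of that “local interpretation” idea in Python (the class and method names are illustrative, not from any particular library; determinants are restricted to 2x2 to keep it short):

```python
from abc import ABC, abstractmethod

class Matrix(ABC):
    """Interface: callers can have incomplete knowledge of the representation."""
    @abstractmethod
    def determinant(self) -> float: ...

class DenseMatrix(Matrix):
    def __init__(self, rows):
        self.rows = rows  # full list-of-lists storage

    def determinant(self) -> float:
        # 2x2 case only, for illustration
        (a, b), (c, d) = self.rows
        return a * d - b * c

class SparseMatrix(Matrix):
    def __init__(self, entries):
        self.entries = entries  # {(row, col): value}; zeros are omitted

    def determinant(self) -> float:
        # same mathematical concept, computed from the sparse representation
        g = self.entries.get
        return g((0, 0), 0) * g((1, 1), 0) - g((0, 1), 0) * g((1, 0), 0)
```

Code that only needs a determinant can take any `Matrix`, exactly as code that reads bytes can take any FILE-like object.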
Inheritance, on the other hand, is mostly bad: it enables sloppy designs. It loads a lot of complexity onto this concept of identity as something apart from (immutable, identity-free) data. It leads to the codified bureaucracy of EnterpriseContextManagerVisitorFactory classes that don’t seem to do much but can’t be excised because no one really knows for sure.
The truth about OOP is that it contains some good ideas, but most of them should be used sparingly, in favor of immutable data, referential transparency, and mutability only when needed or as a performance optimization, with strict attention to its containment and correctness (i.e. that its only “side effects” are actually a desired main effect). OOP should be an advanced topic taught to people who already know how to program, rather than what it was originally sold as: a magic sauce that turns mediocre, business-grade programmers into productive ones. (At that, it failed.)
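The immutable-data-plus-plain-functions alternative can be sketched like this (a toy example, names invented for illustration): updates return new values instead of mutating in place, so any “side effect” has to be an explicit, visible main effect at the call site.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen=True makes instances immutable
class Account:
    owner: str
    balance: int

def deposit(acct: Account, amount: int) -> Account:
    # A referentially transparent function: no hidden state is touched;
    # the caller decides what to do with the new value.
    return replace(acct, balance=acct.balance + amount)

a = Account("alice", 100)
b = deposit(a, 50)
# a is unchanged; b is a distinct new value
```

If mutation is ever needed as an optimization, it can be confined to one function body while the external interface stays value-in, value-out.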
You might enjoy this paper by Meyer that describes most aspects of programming in set theory.
I liked it since it was so intuitive and precise at the same time. I don’t know formal methods, but sets and set operations are fairly straightforward. That’s the same reason Z notation was successful in high-assurance systems: the language wasn’t too heavy.
Thanks! I’ll read it.
Inheritance: Fast Food.
Composition: A french buffet with strange names.