Inheritance is something that makes your code hard to understand. Unlike a function, which you can read line by line, code with inheritance can play "go see another file" golf with you for a long time.
This isn't an argument against inheritance, it's an argument against modularity: any time you move code out of line you have the exact same "problem" (to the extent it is a problem) and you can only solve it the same way, with improved tooling of one form or another. ctags, for example, or etags in Emacs.
Inheritance has this problem to a much larger degree because of class hierarchies. Tracing a method call on a class at the bottom of the tree requires checking every parent class to see if it's overridden anywhere. Plain function calls don't have that problem. There's only a single definition.
Plain function calls don't have that problem. There's only a single definition.
Unless we start using higher-order functions, where the function is passed around as a value. Such abstraction creates the exact same problem, only now it's called "where does this value originate from".
Yes, which is why higher order functions are another tool best used sparingly. The best code is the most boring code. The most debuggable code is the code that has the fewest extension points to track down.
This is, of course, something to balance against debugging complicated algorithms once and reusing them, but it feels like the pendulum has swung too far in the direction of unwise extensibility.
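To make the tracing point concrete, here is a small hypothetical Python sketch (all names are invented): the inherited version forces you to search the hierarchy for overrides, the plain function has a single visible definition, and the higher-order version moves the hunt to "where did this callable come from".

class Report:
    def header(self):
        return "Report"

    def render(self):
        # which header() runs? you have to check every subclass for an override
        return self.header() + "\n---"

class SalesReport(Report):
    def header(self):
        return "Sales report"

def render_report(header):
    # plain function: one definition, everything visible at the call site
    return header + "\n---"

def render_with(make_header):
    # higher-order version: now the question is where make_header came from
    return make_header() + "\n---"

print(SalesReport().render())
print(render_report("Sales report"))
print(render_with(lambda: "Sales report"))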
The best is Python code where the parent class can refer to attributes only created in child classes. There are equivalents, but less confusing ones, in languages like Java.
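For anyone who hasn't run into the pattern, a minimal hypothetical sketch of what is being complained about: Base reads self.name even though nothing in Base ever creates it.

class Base:
    def greeting(self):
        # self.name is never defined in Base; this only works because every
        # subclass is silently expected to set it
        return "Hello, " + self.name

class Child(Base):
    def __init__(self):
        self.name = "world"

print(Child().greeting())   # Hello, world
# Base().greeting() would raise AttributeError: 'Base' object has no attribute 'name'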
Overrides are mostly for modularity and reducing code duplication. Without classes, you might either end up with functions with tons of duplicated code, or tons of functions having a call path to simulate the "class hierarchies".
And yes, it’s going to make the code harder to read in some cases, but it also makes the code much shorter to read.
Without classes, you might either end up with functions with tons of duplicated code
Why? There is literally no difference in code reuse between reaching code through inheritance vs. function calls, apart from possibly needing to pass state to a function that could otherwise be held in a class instance (aka an object). And that is certainly less than class-definition boilerplate.
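A rough sketch of that trade-off (hypothetical names): the only difference is whether the state lives on self or is passed as an argument.

from dataclasses import dataclass

class Tally:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n          # state hidden on the instance
        return self.total

@dataclass
class TallyState:
    total: int = 0

def add(state, n):
    state.total += n             # the same state, passed explicitly
    return state.total

t = Tally()
t.add(3)
s = TallyState()
add(s, 3)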
or tons of functions having a call path to simulate the “class hierarchies”
The call chain is there in both cases. It's just that in the class-based approach it is hidden and quickly becomes a nightmare to follow. Each time you call a method statically or access a class attribute, you are basically pointing to a spot in your code that could be hooked to different points in the hierarchy. This is a problem. People don't think it is a big deal when they write a simple class and know it well, because the complexity is sneaky. Add another class and all of a sudden you have brought a whole other hierarchy into the picture. Each time you read "this" or "instance.something", you're in for a hunt. And each additional hierarchy you bring into the picture increases complexity geometrically. Before you know it, the project is unmanageable, and the ones who wrote it have moved on to some greenfield project, making a similar mess for some poor soul to struggle with after them.
And yes, it’s going to make the code harder to read in some cases, but it also makes the code much shorter to read
It doesn't really. People fall for this because you can instantiate a class and get a bunch of hidden references that are all available to you at will, without needing to explicitly pass them to each method, but you only get this by defining a class, which is way more verbose than just passing the references you need.
All that said, what classes do offer in most languages is a scope that allows fine-grained control of data lifecycle. If we remove inheritance, then class members are akin to global variables in non-OOP languages, except that you can create as many scopes as you want. I wish languages like Python would do this, as, for the same reason as the OP, I suffer from working with OOP codebases.
You make it sound like inheritance is the only way to reduce code duplication. In my experience that is simply not true, you can always use composition instead. E.g. Haskell doesn’t support inheritance or subtyping and you still get very compact programs without code duplication.
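Sketching the composition alternative in Python to match the rest of the thread (hypothetical Logger example): the behaviour that would otherwise be an overridable method is passed in as a value instead.

class Logger:
    def __init__(self, format_message):
        self.format_message = format_message   # injected behaviour, no subclassing

    def log(self, msg):
        print(self.format_message(msg))

plain = Logger(lambda m: m)
loud = Logger(lambda m: m.upper() + "!!!")
loud.log("build finished")   # BUILD FINISHED!!!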
Without classes, you might either end up with functions with tons of duplicated code, or tons of functions having a call path to simulate the “class hierarchies”
This is only true in my experience if you’re trying a functional approach with an OO mindset. There are other ways to solve problems, and many of them are far more elegant in languages designed with functional programming as the primary goal.
When you move a bit of code out of your file, it's not going to call back into functions from the first file. You're even going to make sure this is the case, that there is no circular dependency, because (in languages that allow you to create one) it makes the code harder to read. In the case of inheritance, those games of everything calling everything else around are just the normal state of things.
Of course, the example in the article is small and limited, because pulling in a monster from somewhere is not going to make it more approachable, but surely you've seen this stuff in the wild.
You might do that, in the same way that you might carefully document your invariants in a class that allows inheritance, mark methods private/final as needed, etc. But you also might not do that. It sounds a bit as if you’re comparing well-written code without inheritance to poorly written code with it.
Not that there isn't lots of terrible inheritance-based code. And I'd even say inheritance, on balance, makes code harder to reason about. However, I think that the overwhelming issue is your ability to find good abstractions or ways of dividing up functionality; the choice of inheritance vs. composition is secondary.
It's just that without inheritance it's easier to make good abstractions. Inheritance lets you do the wrong thing easily, without any friction; I just read a good article about that a few weeks ago.
This isn't an argument against inheritance, it's an argument against modularity: any time you move code out of line you have the exact same "problem" (to the extent it is a problem) and you can only solve it the same way, with improved tooling of one form or another. ctags, for example, or etags in Emacs.
Not really. Reaching code by accessing a class or object member forces you to manually figure out which implementation is used, or where the implementation lives in a web of nested namespaces. In the case of a plain function, each symbol is unambiguous. This is a big deal. If you have types A and B, with A having an attribute of type B, and each of these types sits in a three-level hierarchy, then when you call A.b.some_b_method() it could be defined in nine different places, and if it is, you need to figure out which one that symbol resolves to. This is a real problem.
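A hypothetical skeleton of the situation described, where one call has nine possible definition sites:

class B1:
    def some_b_method(self):
        return "B1"

class B2(B1):
    def some_b_method(self):     # maybe overridden here...
        return "B2"

class B3(B2):
    pass                         # ...or inherited from further up

class A1:
    def __init__(self, b):
        self.b = b

class A2(A1):
    pass

class A3(A2):
    pass

a = A3(B3())
print(a.b.some_b_method())       # "B2": found only by walking both hierarchies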
This isn’t an argument against inheritance, it’s an argument against modularity:
Yeah, all code should be in a single file anyway. No more chasing of method definitions across multiple files. You just open the file and it’s all there…
Any form of modularity should be there to raise the level of abstraction, i.e. become a building block that is solid (pun intended), firm, and utterly reliable, which you can use to understand the higher layers.
You can peer inside the building block if you need to, but all you need to understand about it, to understand the next level up, is what it does, not how it does it.
Inheritance is there to allow you to know that "all these things IS-A that", i.e. I can think of them and treat all of them exactly as I would treat the parent class (the L in SOLID).
I can utterly rely on the fact that the class invariant of the superclass holds for all subclasses, i.e. the subclasses may guarantee other things, but amongst the things they guarantee is that the superclass's class invariant holds.
I usually write a class invariant check for every class I write.
I then invoke it at the end of the constructor, and the beginning of the destructor, and at the start and end of every public method.
As I become more convinced of the correctness of what I’m doing, I may remove some for efficiency reasons. As I become more paranoid, I will add some.
In subclasses, the class invariant check always invokes the parent class's invariant check!
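Roughly what that discipline looks like in Python (the names are mine, and Python has no reliable destructor hook, so only the constructor and the public method are checked here):

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._check_invariant()              # end of constructor

    def _check_invariant(self):
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        self._check_invariant()              # start of every public method
        self.balance -= amount
        self._check_invariant()              # and again at the end

class SavingsAccount(Account):
    def __init__(self, balance, min_balance):
        self.min_balance = min_balance
        super().__init__(balance)

    def _check_invariant(self):
        super()._check_invariant()           # always re-check the parent's invariant
        assert self.balance >= self.min_balance

acct = SavingsAccount(100, 50)
acct.withdraw(30)                            # fine; withdrawing 60 would trip the assert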
Similarly, modularization was a concept for a while but modules only appeared as a first-class language component with Modula, which came out in 1975. Even today most industry languages don’t have proper modules that encourage code specialization. Most languages with “modules” are really just namespaces.
I’m not familiar with “proper modules that encourage code specialization”. Can you elaborate more on that? What are they exactly? What are the current languages that have them?
You’ll want to look at how the ML family (excluding F#) does parameterized modules. You can import a module into another module and specialize it in the process, say by refining the type signatures. A few spec languages also use this, as it’s important to refinement proofs, but relatively few programming languages.
The part about Django hits the nail on the head. Frameworks are the plague, libraries are awesome. Whenever I read the word 'framework' I run!
The idea that the application is written by framework authors and the user just places chunks of their own code here and there is just not sane. Whenever the user needs to do something that doesn’t fall into that workflow, then they are clueless and solve it by including another ‘framework’.
I see all of these people complaining about web frameworks, so I'd like you to show me how you do a "web library": that is, a library made for creating web servers, without requiring the user to reimplement the framework themselves.
These are the things web frameworks provide:
an url dispatcher
macros built around said url dispatcher (depends on the language how fancy you can get with these)
a template library
orm/active record
a request parser
helpers for sending replies
piping among your controllers/views/models/whatever-they-call-it. Sometimes this becomes more tedious than just using your language's import mechanisms
Additionally, they often hide the main listening loop from you.
There is no reason for these to live inside a bundle called framework. One can implement these separately and mix and match.
Then you can write the listening loop yourself, or, if you prefer to use a standardized application format such as WSGI, servlets, etc., you can bring in a pluggable compatible web server library. Or you use a web server library; typically it comes with a method or function to start the main loop.
Java, Python, and probably other languages come with a standard API for writing web applications. If you implement your application using those APIs, you can then use a pluggable server implementation. You don't need a framework. Indeed, most frameworks in these languages are built on top of these. Some examples: https://www.python.org/dev/peps/pep-0333/ https://en.wikipedia.org/wiki/Java_servlet
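As a sketch of that library approach using nothing but the Python standard library (the handler names are hypothetical), the "URL dispatcher" is a dict you own and the listening loop is an explicit call into wsgiref:

from wsgiref.simple_server import make_server

def hello(environ):
    return "200 OK", b"hello"

def goodbye(environ):
    return "200 OK", b"goodbye"

routes = {"/hello": hello, "/goodbye": goodbye}   # the whole "URL dispatcher"

def app(environ, start_response):
    handler = routes.get(environ["PATH_INFO"])
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    status, body = handler(environ)
    start_response(status, [("Content-Type", "text/plain")])
    return [body]

# the main loop is yours to start, not hidden by a framework
make_server("127.0.0.1", 8000, app).serve_forever()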
The usual definition of framework vs. library is that your code calls the library code, while framework code calls your code. Notably, the most useful feature of web frameworks is the one that makes them frameworks: the URL dispatcher. A lot of web frameworks don't even have an ORM, or "piping among your controllers/views/models/whatever-they-call-it".
You can trivially write a URL dispatcher that you call with a URL (and other details of a request) and that gives you back a function or some identifier, so you can call it or do something else with it. That would be a library, right? Clojure's web stack is built like that.
Now the question: why not let the framework have this piece of code? What else would you want to do with the returned function other than call it? I don't see the usefulness of excluding that tiny piece of code.
Well, the thing is that if you just call that function, then you certainly have no other options. In the case of a library you can supply identifiers instead of functions. You could make an async call instead of just calling a function. You could supply classes (ugh) there. You could send the results of dispatch somewhere downstream, say to Kafka, for, again, async processing.
Lots of options, because you're deciding what to do. In the case of a framework, you have no hand in the decisions.
I see no point to it. Why couldn't the framework just handle async functions? Why couldn't the framework have class-based views? What's the point of sending results from there? Many frameworks have so-called middleware, which integrates cleanly and is composable. Like, I don't see any point to it, besides being able to say, "Only my code calls my code", which is a lie anyway, since it's the OS that calls your code in the first place, and in many languages "main()" isn't actually the first thing run. Libraries have their place, but there are places where frameworks are just simply better.
Why couldn’t the framework just handle async functions?
It could, until a new need arises!
there are places where frameworks are just simply better.
Yes, they are good when you don’t yet know how to do things. They allow you to start without learning too much. When you have enough knowledge, they are crutches.
How often do you add a new way to call your code? If it’s more often than your scheduled project rewrite, it’s too often.
Yes, they are good when you don’t yet know how to do things. They allow you to start without learning too much. When you have enough knowledge, they are crutches.
Then why do experienced developers make and use them? Probably because they got past the "I don't like it when I don't know how absolutely everything works under the hood" phase. Frameworks are fit for making code that handles requests, be it network requests, UI requests, etc. Libraries are fit for answering requests. When using a framework, you are just writing another library.
Probably because they understand that, when used properly, it is a powerful tool. I think the problem with inheritance in modern languages is that it is too easy to use. It is too powerful to be available with such simple syntax; IMO it should involve some arcane manipulation of the language's object model.
How often do you schedule rewrites? What's the point of scheduled rewrites?
Every two to four years. It happens naturally in companies anyway. You bring your tech stack up to date and learn from the mistakes you made last time, while also integrating, cleanly and reusably, that hodge-podge feature you did for that one client.
Lots and lots of experienced developers do not make frameworks and do not use them.
I don’t know about you, but I don’t really see them in fields where using frameworks makes sense, such as web backends, UI, etc.
I'm building a product with a small team (I'm co-founder and CTO) and to me scheduling rewrites every 2-4 years sounds insane. I think we would probably just go bankrupt if we tried to do a rewrite for… for what?
Maybe I don't understand what you mean by "rewrite". To me, a rewrite is "let's init a blank repository and start from scratch, but this time better than before, using new shiny technologies".
It is probably more gradual with small teams, but it still does happen. Probably even more often with small teams, since as the team expands, more limitations are found in the previous architecture.
I wouldn’t call that the most useful feature at all as that would be pretty trivial to implement when compared with, for example, request parsing.
The most ubiquitous and usual, certainly, yes.
But there is no reason why this needs to be provided as a framework other than an existing crowd of web developers that are not aware that all you would need is a library.
Take Flask, for example: most people define request-handling functions wrapped in a decorator and don't ever call them manually. The definition acts as the dispatching rule, as per the rules provided in the decorator.
But there is no reason whatsoever why this is better than having a function that binds a URL rule to a handler function. In fact, that is exactly all that happens in the decorator implementation. It just calls app.add_url_rule(). You can do this yourself instead of using the decorators and get rid of the silliness of having a dependency calling your code rather than the other way around.
If you check your favourite framework's URL dispatcher, you will be surprised how simple it is.
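For reference, this is roughly what decorator-free registration looks like in Flask; @app.route is documented as a shortcut for add_url_rule, so the explicit call below registers the same view the decorator would.

from flask import Flask

app = Flask(__name__)

def index():
    return "hello"

# equivalent to decorating index with @app.route("/")
app.add_url_rule("/", "index", index)

if __name__ == "__main__":
    app.run()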
The reference to inversion of control toward the end is confusing to me. IoC != Inheritance. While I generally agree that inheritance is often not the way to go, IoC as a concept is vital to any large codebase where long-term maintainability is a priority.
You are right, these are two different tools, but IoC makes stuff convoluted and should be used very sporadically. There are similarities between them with regard to the indirection of code evaluation.
In the end, inheritance is worse because it makes it easy to do the wrong thing, while IoC carries a lot of friction and because of that is used rarely.
I think there's probably a place for inheritance, but I haven't written a subclass in years. Working in golang without inheritance makes it really clear that there are two use cases for inheritance: overrides and callbacks. Overrides in Go via struct embedding aren't really dynamic dispatch, so I find them very easy to reason about.
Callbacks on the other hand are imo better handled by members that are functions or interface values.
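In Python terms (hypothetical Button example), the "callback as a member" version is just a callable stored on the object instead of a method you override:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Button:
    label: str
    on_click: Callable[[], None] = lambda: None   # customised by assignment, not by subclassing

    def click(self):
        self.on_click()

save = Button("Save", on_click=lambda: print("saving..."))
save.click()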
Ah right. :-) I'm using Clojure though. This post was prompted by finding that Turbolinks had been rewritten in heavily-OOP TypeScript and it was just impossible to find what I was looking for. By contrast, in the old codebase, now called turbolinks-classic, I found that code in a few minutes.
What to do then? Well, I write in Clojure now and let me tell you that most of the time using functions is actually okay! If you’re doing Python, try to use more NamedTuples, dataclasses and similar: this way you’re developing with data and not with objects.
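A small sketch of the "data, not objects" suggestion (hypothetical Invoice example): a NamedTuple holds the data and plain functions operate on it.

from typing import NamedTuple

class Invoice(NamedTuple):
    customer: str
    amount_cents: int

def total_with_vat(invoice, rate=0.2):
    return round(invoice.amount_cents * (1 + rate))

inv = Invoice("ACME", 1000)
print(total_with_vat(inv))   # 1200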
I feel like this post is missing the closure. How would you reorganize this specific code in Python, without the inheritance? How would the solution look in Clojure?
I would just do a protocol and implement it. And in Python I would do the same - an abstract base class and then implement it. Not much savings in code anyway!
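Roughly what the abstract-base-class route looks like in Python (hypothetical Renderer example); the Clojure version would be a defprotocol plus an implementation, and as noted above the savings are modest either way.

from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def render(self, text):
        ...

class PlainRenderer(Renderer):
    def render(self, text):
        return text

print(PlainRenderer().render("hi"))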
To avoid the next OOPs. Say you have some command X. Ideally you'd like to predict what X does before you give it a go. This is precisely what type information (except in your language) does. And Python does not even use inner methods, only super… :-)
class Base:
    def message(self):
        return "I'm " + self.name()

class Inner(Base):
    def name(self):
        return "a freak"

x = Inner()
print(x.message())  # => I'm a freak
“I hear your band are selling their turntables and buying guitars.
…
I hear your band are selling their guitars and buying turntables.”
—LCD Soundsystem
When an old teacher of mine said that OOP is the "goto of the nineties", this is exactly what he was referring to.
For extra fun, use higher-order functions with class hierarchies!
Isn't the example in the linked article doing exactly that (the parent class referring to something defined only in the child class)?
Not exactly. :)
Check out Lib/_pyio.py (the Python io implementation) in CPython for lots of this.
Interesting article from Carmack about inlining everything:
http://number-none.com/blow/blog/programming/2014/09/26/carmack-on-inlined-code.html
I wrote a response to this article here, which focuses on the historical perspective.
Sather is an Eiffel derivative that uses contravariance for function arguments.
The Differences Between Sather and Eiffel gives a summary of the changes.
An interesting read, thank you very much.
Thanks! It was really interesting to read.
Great read. To the point.
Why do experienced developers use inheritance? That doesn't make sense to me either.
Not having decorators doesn’t change flask from being a framework to being a library. It’s now just a framework without syntactic sugar.
You should check out Go! :)
why? :)
no inheritance
True, but at the same time… Turbolinks works a lot more often (for me, anyway) these days than it used to.
Not just “no inheritance”, “no” a lot of other things too.
It does have objects but uses composition instead of inheritance for code reuse.
https://pyxis.nymag.com/v1/imgs/d6a/dc7/4a5001b7beea096457f480c8808572428b-09-roll-safe.rsquare.w700.jpg
With Python you can destroy the reality. :-))
Of course, all methods are virtual and binding is late. But do check this: overment in Racket OOP :wink: