Ultimately things are more composable when they make fewer assumptions about their environment and return maximal information about their result. A function that satisfies this idea could be considered “information maximal” and in principle an information maximal function has maximal opportunities to be used elsewhere.

On the other hand, without some ideas of information forgetting (substitutability being just one) it can be difficult to make use of information maximal functions. You end up speaking concretely all of the time and creating rigid boundaries between things. Either you end up rigid or you turn to “big hammers” to smash through these rigid interfaces (like interface {} and coercion). This is a loss because now the composability of your program depends upon implicit knowledge and assumptions instead of explicit (e.g., communicated/documented) knowledge and assumptions.

You’ve made your world a little more magical in the worst way.

Ultimately things are more composable when they make fewer assumptions about their environment and return maximal information about their result.

Assumptions about the environment are often key. Area2: Rectangles -> Float may be a subtype of Area1: Squares -> Float, but I know Sqrt(Area1)(x) will not approximate an irrational.

I’m not sure I follow your argument; could you be more explicit? It seems like you’re saying that you can point out a situation where a certain technique is not valuable and are therefore saying that the technique is not valuable. But certainly there are many situations where things become more composable by knowing less? Take the compose function, for example, which has the type ('a -> 'b) -> ('c -> 'a) -> ('c -> 'b). Certainly this is more composable than having to know what each of those type variables is?

Also, Square being a subtype of Rectangle seems like a commonly understood anti-pattern from my reading, so your example seems like a poor one; perhaps you have a better one?

Yes, and when you pass in x you work with all of those assumptions. Otoh, that subtyping behavior of Area1 and Area2 is great: we don’t need to know w and h are the same to compute an area.

I’m going to assume you have at least a passing understanding of how Go interfaces work since you mentioned them. Can you give me an example to illustrate your claim?

Interfaces are a perfect example. They introduce “structural subtyping” in that any type which satisfies the interface is a subtype of that interface. Thus, a function func X(a SomeInterface) bool can be “upcast” to one of type func X(a SomeType) bool whenever SomeType instantiates SomeInterface.

Taking SomeInterface is specific to the assumptions/information you actually need for the function. We can forget these specificities if we want.
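As a minimal Go sketch of that point (the `Named`/`User` names are invented for illustration): a function that asks only for an interface accepts any concrete type satisfying it, forgetting everything else about that type at the call site.

```go
package main

import "fmt"

// Named is a hypothetical interface: the only assumption IsAnonymous
// makes about its argument is that it has a Name() method.
type Named interface {
	Name() string
}

// User is a hypothetical concrete type satisfying Named.
type User struct{ name string }

func (u User) Name() string { return u.name }

// IsAnonymous asks only for Named, so any satisfying type may be passed;
// the specifics of the concrete type are "forgotten" here.
func IsAnonymous(n Named) bool {
	return n.Name() == ""
}

func main() {
	fmt.Println(IsAnonymous(User{name: "ada"})) // false
	fmt.Println(IsAnonymous(User{}))            // true
}
```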

Are you asking why class-based programming with subclassing is useful? Most CS undergrad programs in the US cover this for at least a semester (using Java), so I’m not used to seeing this question. There’s not really a short answer to this; it’s just a reasonably useful paradigm.

For folks who (like me) are unfamiliar with the reference:

When Albert Einstein was making the rounds of the speaker’s circuit, he usually found himself eagerly longing to get back to his laboratory work. One night as they were driving to yet another rubber-chicken dinner, Einstein mentioned to his chauffeur (a man who somewhat resembled Einstein in looks & manner) that he was tired of speechmaking.

“I have an idea, boss,” his chauffeur said. “I’ve heard you give this speech so many times. I’ll bet I could give it for you.” Einstein laughed loudly and said, “Why not? Let’s do it!” When they arrived at the dinner, Einstein donned the chauffeur’s cap and jacket and sat in the back of the room. The chauffeur gave a beautiful rendition of Einstein’s speech and even answered a few questions expertly.

Then a supremely pompous professor asked an extremely esoteric question about anti-matter formation, digressing here and there to let everyone in the audience know that he was nobody’s fool. Without missing a beat, the chauffeur fixed the professor with a steely stare and said, “Sir, the answer to that question is so simple that I will let my chauffeur, who is sitting in the back, answer it for me.”

That joke is an example of joke reuse - to stay on topic, sort of. I know it was at least used in the 1700s by Hasidic Jews about some great rabbi. It probably dates back to Babylon though.

Not related in any way, except that it reminded me of this one:

The pope got really curious about secular life once. He was especially excited about the idea of driving, since his chauffeur seemed to be a stand-up guy who really enjoyed his work. So one day he proposed that he be allowed to drive the car for a little bit just to get a feel for it. The chauffeur agreed and selected a lonely road in the middle of nowhere. He explained the basics to the pope and settled himself in the back of the limousine while the pope drove off. In a little bit his holiness, getting the hang of things, started to put his foot down and pretty soon a cop on a motorbike flagged them down.

As the cop came round to the car and the pope rolled down the window, the officer started. He looked wide-eyed at the pope then peered discreetly and shyly at the reclusive figure in the back. He walked uncertainly back to his motorbike and was heard raising headquarters.

“Yes sir. I stopped them for speeding. I don’t know who he is, but the Pope is his driver.”

I didn’t mean to be dismissive, it’s just that to answer a question like that you have to know the asker’s background. If someone asks why functional programming is useful, you can assume they have a standard imperative background and work from there. If someone asks why subclassing is useful, I haven’t the slightest idea what their background is. Almost everyone learns that in undergrad, and apparently the asker did as well but decided to ask the question anyway. Even if you don’t particularly like class-based inheritance (I don’t), I think almost everyone knows why it’s useful.

Yeah, I’m familiar with how subclassing works, having both a CS degree and years of Java use. But it was a bad idea that got widespread use. Most real-world problems don’t naturally lend themselves to being modeled via inheritance, so it ends up being abused for code reuse.

Inheritance is “reuse of a factory”. Subclassing is “shared code for operations”. Subtyping is actually a great concept—it’s “information theoretic order relations” and is super useful.

The problem is that so many languages have decided to implement subtyping via subclassing and then fully conflate the two (and thus—even worse—conflate inheritance and subtyping).

There’s really no inherent relationship. It’s a design decision on behalf of some languages and it’s, as far as I’m concerned, the heart of the “bad ideas” here.

So, again, all said and done, subtyping is perfectly nice. No reason not to talk about it.

Agreed, the conflation between subtyping and inheritance is the problem in Java et al.
But Go lets you do implicit subtyping (e.g. for use in polymorphism) via interfaces in a way that sidesteps the worry about getting explicit subtyping correct.
http://hackthology.com/object-oriented-inheritance-in-go.html has a good explanation.

Admittedly, if you actually do need to model explicit subtyping then Go isn’t great at it. But in practice I’ve seen very few cases where this is an issue.

Sure, and I was mostly responding to your mention of inheritance. Structural subtyping is great!

Generally, I think user-defined nominal subtyping lattices are all broken. Stephen Dolan’s recent PhD thesis is the only sensible form of that I’ve ever seen—it has the principal typing property!

Oh, I see. Agreed that structural subtyping is great. But it seems like you need not agonize about covariance & contravariance when using it. Am I missing something there?

This is an illustration of why I find “type theory” so uncompelling. What is proved here? Why is it interesting?
(Animal -> Greyhound) <: (Dog -> Dog) means what? If f: Animal -> Greyhound, then f(pig) is defined, but clearly pigs are not dogs so other than being able to misuse some terminology from category theory, what have we learned?

Added: Suppose f: X -> Y and g: Z -> Q. If X is a subset of Z and Q is a subset of Y, then any expression involving G(f) can be replaced by G(g) with the properties that if G(f)(a) is defined, then G(g)(a) is defined (since any argument a that is defined for f must be defined for g, and since any value in the image of g is a value in the image of f). Hooray. No category theory needed, but on the other hand, still nothing non-trivial learned. In fact, as decades of silly papers about the Liskov principle have shown, a program attempting to make use of this property of subtyping can fail easily because we are generally interested in properties more complex than set membership.

It means that if you expect a function Dog -> Dog then possessing a function Animal -> Greyhound is good enough. It doesn’t mean what you implied about pigs—to do that is to invert the whole idea of substitutability/variance.

So I guess the whole point of the post is to explain why what you suggest is wrong and it apparently didn’t succeed.

I had two points: both poorly expressed, I’m sure. The first is that what is so laboriously expressed using terms copied from category theory is a trivial property of sets. If f: X -> Y and g: Z -> Q and X is a subset of Z and Q is a subset of Y then P(g(x)) will be defined if P(f(x)) is defined. Amazing! I showed the same thing using concepts of elementary school algebra - David Parnas once described Milner algebras as a complex way of restating primary school sets and functions; I had the same thought here. And my second point was that this is not a particularly interesting fact for programming because, as shown in the endless literature on Liskov subtypes, set membership is often conflicted and complicated and does not carry the semantic information that one might assume.

Covariance and contravariance are mainstream CS terms, the kind of language you’d expect to read in a book about Java. Not sure why you are bringing up category theory, other than to trigger reflexive disgust in people who don’t know any better.

It’s good that you find this easy. It would be a problem if someone who has read “endless literature” on the Liskov substitution principle didn’t find a 101-level description of subtyping to be beneath them.

I’m aware but that isn’t relevant to my point. These particular terms have broken out of those fields and entered the mainstream CS consciousness. It’s like referencing Diophantine analysis to argue that multiplication is esoteric.

For better or worse, that’s the standard terminology. Who cares? You don’t need to know anything about category theory to understand the concepts of contra/co-variance.

Also, depending on the type system, finding a set-theoretic semantics for subtyping can be highly non-trivial. For example, try to construct a set-theoretic model for the types generated by the grammar τ = ∀a. τ | τ → τ | a where the forall quantifier can be instantiated impredicatively.

For example, try to construct a set-theoretic model for the types generated by the grammar τ = ∀a. τ | τ → τ | a where the forall quantifier can be instantiated impredicatively.

So we have now gone from describing trivial concepts as if Saunders MacLane was explaining category theory to describing things that have absolutely no meaning at all in programming.

Then do you mind expanding? Elsewhere, you said:

people use this jargon to express elementary properties of sets

and

“set” comes from mathematics. My comment is that I don’t see the utility of bringing such language into a discussion of something so trivial

My point here is that subtyping cannot easily be understood in terms of set membership, and provided a simple and important (both theoretical and practical) counterexample. You’re the one who brought up set theory, so you can’t harp on me for continuing your train of thought.

My point was that if I can explain “subtype” using elementary sets from primary school, there is no point in using terms like “contravariance”. And when we explain “subtype” using basic math, we see it to be a pretty thin idea. I’m all for using mathematics and the language of mathematics to understand CS concepts, but dubious about the dressing up of simple ideas in opaque language. I’m far from an expert in category theory, but it’s obvious that MacLane/Birkhoff are getting at some interesting properties of mathematical systems when they construct an apparatus for translating between e.g. Noetherian rings and topology. But what are we doing here? I don’t see the utility, and there are, as far as I know, zero results that are not self-referential. That is, using category-theoretic mechanisms, mathematicians prove substantive theorems about concrete domains. But the results in computational type theory seem to be about computational type theory and trivial properties of systems.

My point was that if I can explain “subtype” using elementary sets from primary school, there is no point in using terms like “contravariance”. And when we explain “subtype” using basic math, we see it to be a pretty thin idea.

It’s a 101-level definition, not groundbreaking research! This is like when you claimed that HM - a 50-year-old, introductory type system - was trivial. It’s supposed to be for anyone with even the most cursory understanding of the subject. And once again, you have ignored that you can’t easily talk about subtyping in terms of sets. You are convinced that you have a solid enough understanding of these concepts to make sweeping proclamations about the entire field, but any correction of yours gets me a “whoa there, MacLane”.

But what are we doing here? I don’t see the utility and, there are, as far as I know, zero results that are not self-referential.

What results are you aware of? Were you aware, for example, that it is undecidable to check whether two types are in a subtype relation in the presence of impredicative parametric polymorphism and contravariance (the example I gave above)? That’s a real, practical issue that people developing real-world type systems with subtyping have to work around.

I don’t get it. I really try to respond to people in good faith here but I am not convinced you actually know anything about type theory. So why do you aggressively troll this forum about how it is useless? Is there anything that could convince you otherwise?

I am sorry to be causing such distress. Not my intention. I don’t get the utility of your example - but I’m not all that sure I even think generics are a good idea in practice.

Is it really so astonishing? Considering that Python, JavaScript, and other dynamically typed languages are so popular, I think you are exaggerating a little bit.

“variable” comes from mathematics as well but it’s a common-place term in computer science and software engineering today. Covariant and contravariant are less ubiquitous but they do come up in general discussion of something like Java, such as the semantics around arrays.

“set” comes from mathematics. My comment is that I don’t see the utility of bringing such language into a discussion of something so trivial - except that it makes this trivial concept sound important. The justification that other people also engage in use of mystification doesn’t address my point.

Giving a concept a name, regardless of its triviality, is fairly natural. It allows one to have precise discussions rather than using the full explanation each time or hoping each other have the same understanding in mind.

Are you saying that giving a concept a name should only be done for concepts you determine are complicated enough to necessitate a name?

This is an illustration of why I find “type theory” so uncompelling.

Do you also find type safety uncompelling? Or is it simply the case that you do not fancy a more elaborate type system than what C offers?

What is proved here? Why is it interesting?

This article is not a proof. This article explains some terminology, with clear examples. The terminology might or might not be interesting, but the concept is interesting because it answers which function types are subtypes of which others.

(Animal -> Greyhound) <: (Dog -> Dog) means what?

It means that (Animal -> Greyhound) is a subtype of (Dog -> Dog). Which means, if you need a function that takes a Dog and outputs a Dog, you can safely use a function that takes an Animal (it will accept dogs) and outputs a Greyhound (which is a dog).

If f: Animal -> Greyhound, then f(pig) is defined, but clearly pigs are not dogs so

It sounds like you misunderstood the example. f is of type (Dog -> Dog) -> T, so it wants some function that is a subtype of (Dog -> Dog). You can’t give it a Pig. g might be capable of accepting Pigs, but that is of no concern for f, which only requires that g can take Dogs and gives back Dogs.
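A Go illustration of the substitution being described, with the Animal/Dog/Greyhound hierarchy modeled as interfaces and a struct (all names invented for the sketch). Go’s function types are invariant, so the substitution is written as an explicit wrapper, but the wrapper compiles precisely because inputs vary contravariantly and outputs covariantly:

```go
package main

import "fmt"

// Hypothetical hierarchy: every Dog is an Animal, and Greyhound is one
// particular kind of Dog.
type Animal interface{ Name() string }

type Dog interface {
	Animal
	Bark() string
}

type Greyhound struct{}

func (Greyhound) Name() string { return "greyhound" }
func (Greyhound) Bark() string { return "woof" }

// NeedsDogToDog plays the role of f: it wants any function of type Dog -> Dog.
func NeedsDogToDog(g func(Dog) Dog) string {
	return g(Greyhound{}).Bark()
}

// animalToGreyhound has the wider-input, narrower-output type Animal -> Greyhound.
func animalToGreyhound(_ Animal) Greyhound { return Greyhound{} }

func main() {
	// The wrapper type-checks exactly because a Dog can be passed where an
	// Animal is expected (contravariant input) and a Greyhound can be
	// returned where a Dog is expected (covariant output).
	asDogToDog := func(d Dog) Dog { return animalToGreyhound(d) }
	fmt.Println(NeedsDogToDog(asDogToDog)) // woof
}
```

Note that nothing here lets `NeedsDogToDog` hand a Pig to the wrapped function; it only ever supplies Dogs, which is exactly what the subtype relation licenses.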

I think it’s a trivial problem. Why is it interesting to know that a function type is a subtype with that definition? imagesOnInternet -> Dogs is a subtype of Dogs->Dogs and so is Animals->Greyhound. So what?

The point I was attempting to make is that modifying the pre-image may be more semantically interesting than subtype conformance. In many cases, in actual programming, it is an error for a program to accept certain inputs.

(Animals->Greyhound) is not a subtype of (Dogs->Dogs). And neither is (imagesOnInternet->Dogs).

(Dogs->Dogs) is a subtype of (Animals->Greyhound) - maybe that’s what you meant?

But there is no subtype relationship in either direction between (Dogs->Dogs) and (imagesOnInternet->Dogs) so I have no idea what you were trying to say there.

I agree with you that makes more sense, but in type theory the set of maps X->Y is a subtype of A->B when Y is a subset of B and X is a SUPERSET of A. This is explained in the OP.

One nice thing about Go is that you don’t need to worry about this.

[Comment removed by author]

What are the advantages of using it?

Composability and reusability.


The claim was that things are more composable when they make fewer assumptions about the environment (I’m guessing that means “input”).

Your response didn’t really clarify anything for me or directly address the question I asked.


I followed what you said but it doesn’t sound like a problem so far…

Oh sorry—Go is mostly good here! It was just that someone had mentioned it so I mentioned Go’s big hammer: interface {}.


Your answer reminds me of that joke about Einstein and his chauffeur.



Nope, totally still do for exactly the same reason: in negative type positions subtyping goes backwards.

Oh, I assumed you were being sarcastic … then I read the subsequent thread!


So “a lot of people use this jargon to express elementary properties of sets” is the justification?


Your claim is that parametric polymorphism (“generics”) and functions have absolutely no meaning in programming?

nope.


That’s an astonishing position to hold in the 21st century.


Dynamically typed languages are a great example of why generics are important!

I find them unaesthetic.

So what non-jargon terms would you suggest instead?


I got the exact opposite from the OP’s article. I’ll have to go read it again to see if I misread it.