For example, in Rust it’s not all-or-nothing. You can write Vec<_> to hint that it’s a Vec but omit the type of its elements (useful for the collect() method, which can build any container but needs to know which one). OTOH std::vector<auto> is not legal AFAIK.
In Rust type inference is function-wide, so it often can work out the type “backwards”:
let mut a = Vec::new();
a.push("string"); // now we know it's a Vec of strings
while in C++ auto is very local, and requires an explicit initializer (and you might end up with std::initializer_list instead of the type you wanted).
Rust also doesn’t have function overloading and implicit type conversions, so the results of the inference are more certain.
There is one huge difference: Rust is memory-safe, so if the compiler gets it wrong, it can only result in a compiler error or incorrect runtime logic. If the compiler gets it wrong in C++, you can wind up with a reference where you expected an owned value to be, and, woo boy, a dangling dereference.
Rust took its type inference from Haskell, right (Hindley-Milner)? Looks like you can hack your way to something close to overloading using traits, but it’s an ugly hack. I guess ad-hoc overloading could slow down the compiler significantly.
What I always found funny is that even in languages like C with “no type inference,” the nested expressions still have implicit, inferred types. Like in this case:
int i = some_function(some_other_function(1 + 3) & 0xF0);
// \ what's some_function's arg? /
// Of course, you can figure it out by looking at some_function's signature,
// but if you're willing to look at some_function's signature for its argument type,
// then why aren't you willing to look at some_function's signature for its return type?
To be fair though, there’s no overloading in C. Once you know what the function returns, you can count on that.
In C++, the return type could depend on an overload selected by one of the arguments. And that argument’s type could itself come from a previous call that was also overloaded, and so on.
This is a tricky spot where auto is either amazing or annoying: if those types are wishy-washy because it’s templated, auto is great and perhaps one of the few ways to easily express those interdependent types. If it’s not, it might leave someone temporarily confused while they figure out the overloads.
I don’t think it is exclusive to C++; in C# country I’ve seen an analogous debate (2008) around the var keyword. You may be on to something with regard to “disliking change”: like C++, C# didn’t always have this feature. I think the other languages you cite were all “born” with type deduction/type inference.
The debate is extremely common w.r.t. the use of the var inference keyword in Java circles as well. The article is probably right: you’d likely see it anywhere that type inference was introduced after the fact and was unfamiliar to part of the userbase who had never used a language with it previously.
I’m one of those old C++ programmers (using C++ since the 98 standard) who just loves auto. I use auto for everything. But I’ve had this argument too with other programmers. I actually think it’s not about fear of change but fear of “not knowing”. Some people fear “not knowing” what the type of a variable is, and these are people who despise dynamic languages. Since auto makes the language FEEL dynamic, they don’t like it. It’s possible C++ has MORE of those people than other languages do.
I am hoping concepts will provide a bit of a middle ground here. Allow people to say “thing like variable” without having to be explicit about the type.
But in the end, it’s about fear of not knowing. Which is crazy, because the use of auto is type safe, so there’s nothing to fear there. Don’t be afraid! Program fearlessly!
I think that’s valid, but there is another aspect to it, which is to do with being able to understand the code. Explicit type annotations can make it easier in some cases, for example if you are trying to see where a particular type is used. Even in languages like Haskell and Elm, it’s recommended to annotate top level functions, even though their types are inferred.
Having to work to tell a computer what it already knows is one of my pet peeves.
A type is not only for the computer; it’s also for the human reading the source. The more you leave to the computer to work out on its own, the more the human reading your code will have to hunt for that now-hidden information.
I also believe that wanting to know the exact type of a variable is a, for lack of a better term, “development smell”, especially in typed languages with generics.
Why?
I think that the possible operations on a type are what matters, and figuring out the exact type if needed is a tooling problem.
Yes, tooling can overcome the issue described above (having to hunt for that now-hidden information), but that doesn’t explain why it is preferable. What is gained by doing so?
A type is not only for the computer, it’s also for the human reading the source
It depends. It doesn’t really matter if myfunc returns a std::vector or a std::list if all I’m doing is filtering the results. It does matter, however, that the type has .cbegin() and .cend() iterators.
Why?
Because of what I wrote just afterwards: “I think that the possible operations on a type are what matters”.
but that doesn’t explain why it is preferable. What is gained by doing so?
Personally, I hardly ever care what type something is unless it’s terribly named. Otherwise it’s getting passed to another function/algorithm anyway. Even if I did know the type, I’d more likely than not have to jump to its definition to find out / remember what it is and what I can do with it.
As for what is gained: refactoring. With auto I don’t have to change all the variable declarations. There’s also the avoidance of implicit conversions.
If the code-base is already good enough that all implementations of an interface are interchangeable (no type-specific side effects one would have to worry about), wouldn’t the refactoring already be possible mostly by tools, with minimal additional work to verify and touch up possible artifacts?
You write that you wonder why this is a debate at all. But then, type inference is a “heavy” feature in a language and reduces code legibility. It has to demonstrate a clear advantage for such a debate to be settled.
C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference. There is still no clear advantage to having it (I mean, it won’t really come into consideration when choosing a language to work with).
Here’s a way to settle the debate: remove local type inference from a language that has it, and see how users react.
Of course, you’ll have to find a language designer willing to do such a thing, but then you’ll see if e.g. legibility is actually an issue: if people complain that adding local type inference reduces legibility (as you say), and just as many people complain that removing it also reduces legibility, then you may be able to make a claim as to whether “legibility” is a subjective property.
wouldn’t the refactoring already be possible mostly by tools, with minimal additional work to verify and touch up possible artifacts?
Even so, who wants large diffs for no reason?
But then, type inference is a “heavy” feature in a language and reduces code legibility
Which seems to be the point of the people who prompted me to write the post in the first place. I understand that that’s your opinion, and it’s one I don’t agree with. What’s odd is that I’m not the only one - there are many languages where this debate doesn’t seem to happen.
C++ is too bloated for me to use anyway, and this is something that I would suspect of most languages implementing type inference
It depends on your definition of bloated. Would C be bloated if all we did was add type inference to it?
There are a few cases where auto is not preferable.
Using auto for loop indices when testing against a size_t: all of the compilers I’ve used don’t try to match the signedness of the loop variable to the value it’s tested against, so you get sign-mismatch warnings.
In hot sections of code (performance-wise), auto tends to cloud my ability to see what the true cost of things is. I’m not sure if that’s a familiarity thing, or it’s just one abstraction too far when I need to know exact details.
Similarly, I’ve been bitten by differences in what auto is, especially around references.
I suspect there’s a steeper learning curve than originally anticipated here, but it is nice to have available for most code.
Perhaps it’s a quality of implementation issue?
Java specification even calls it type inference: https://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html

Because it is!