Overall, this is a well-researched and detailed article, but the tone comes across as “this doesn’t monomorphize 100% and therefore Go generics are bad and slow” - which, as a prevailing sentiment, is simply an incomplete analysis. The Go team was obviously aware of these tradeoffs, so the framing seems unfair in many ways.
One key thing not discussed in this article is generic containers where the element type was previously interface{} and no methods are ever called on that interface. In this extremely common use case, Go’s generics are likely to be as fast as manually monomorphized code.

Another key use case is general code that need not be performance-critical, where reflection may have been used previously. In these cases, generics are likely to be strictly faster than reflection as well (potentially modulo some icache issues for megamorphic call sites).
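To make the container case concrete, here is a minimal sketch (the AnyStack/Stack names are mine, not the article’s): since no method is ever called on the element type, the generic version has no dynamic dispatch to pay for.

```go
package main

import "fmt"

// Pre-generics container: every element is stored as an interface{}
// (boxed), and callers need a type assertion on every read.
type AnyStack struct{ items []interface{} }

func (s *AnyStack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *AnyStack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

// Generic container: elements are stored directly as T, and since no
// method is ever called on T, there is nothing for the shape-based
// implementation to dispatch on the hot path.
type Stack[T any] struct{ items []T }

func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

func (s *Stack[T]) Pop() T {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var a AnyStack
	a.Push(42)
	n := a.Pop().(int) // type assertion required

	var g Stack[int]
	g.Push(42)
	m := g.Pop() // already an int

	fmt.Println(n, m)
}
```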
Finally, this design allows for future compiler enhancements - including additional inlining of indirect/interface calls!
As an aside, if you were doing semi-automated monomorphization before with text templating, you now have much richer and more robust building blocks for such a toolchain. That is, you can use Go’s syntax/parser and type-checker, then provide a code generator that spits out manually monomorphized Go code. If nobody has done this yet, I’m sure it will happen soon, as it’s quite straightforward.
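For illustration, a minimal, hypothetical sketch of the front end such a generator could start from, using only the standard library’s go/parser and go/types (Go 1.18+); actually emitting the specialized Go code is the real work and is omitted here.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
	"log"
	"os"
)

func main() {
	fset := token.NewFileSet()
	// Parse one file given on the command line; a real tool would load
	// whole packages (e.g. via golang.org/x/tools/go/packages).
	file, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
	if err != nil {
		log.Fatal(err)
	}

	// Type-check and record every instantiation of a generic function
	// or type, along with its inferred type arguments.
	info := &types.Info{Instances: make(map[*ast.Ident]types.Instance)}
	conf := types.Config{Importer: importer.Default()}
	if _, err := conf.Check("p", fset, []*ast.File{file}, info); err != nil {
		log.Fatal(err)
	}

	// Each entry is one specialization a monomorphizing generator
	// would need to emit as plain Go code.
	for ident, inst := range info.Instances {
		fmt.Printf("%s: %s instantiated as %s\n",
			fset.Position(ident.Pos()), ident.Name, inst.Type)
	}
}
```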
I didn’t get that tone, especially with the conclusion of the article. The author encourages folks to use generics in certain cases, shows cases where they do get optimized well, and is hopeful for a future where they get full monomorphization and/or better optimization heuristics.
To me this seemed like a very fair article, even if they did miss the case that you mentioned.
The article mentions that byteseq is fast. This is just a special case of that: the vtable indirection can’t slow you down if you never dispatch a method. :-)
I was looking into this last night. I think you can still use the “go2go” tool from the design prototyping of generics, but it’s no longer being maintained and will probably become subtly incompatible soon if it isn’t already.
It’s hard to take it seriously as anything other than the Go team continuing to hate generics and trying to do everything they can to discourage people from using them.
The fact that there are people here talking about (afaict) continuing to use old Go code generators to support generic code without an absurd memory hit demonstrates that Go’s generics have not achieved the most basic of performance goals.
sigh It’s hard to take you seriously with this comment. You might have different opinions/preferences than the Go team, but to assume that they are trying to sabotage themselves is ridiculous.
I’ve written and deployed three major Go systems – one of which processes tens of petabytes of video per week – and I can count the number of times monomorphisation was necessary to achieve our performance goals on one hand. Generally, I copy/paste/tweak the < 100 lines of relevant code and move on with my work. Performance is not the only motivation for generics.
I’ve also written a fair bit of C++ in my life and have had the experience of having to disable monomorphization to avoid blowing the instruction cache. To say nothing of compile times.
You don’t like Go. That’s fine, but maybe don’t shit on the people who are working hard to create something useful for the people who do like it.
That summarizes my Go experience in the last decade. I miss this in basically every other language now.
Also the generics turned out very nice imho, I’m impressed with the balance they managed to strike in the design.
Also, this is… clearly a compiler heuristic that can be tightened or loosened in future releases. They just chose “all pointers are the same” in order to ship quickly.
OTOH, no one can say they shipped generics “quickly”. Even Java did it quicker, though not better.
It only took 11 years after this was posted: https://research.swtch.com/generic :-)
The Go team has stated that they do not like generics. They only added them now because everyone working with Go was continuously frustrated by, and complaining about, the lack of generics.
Given that background, I believe it is reasonable to assume that the Go team did not consider competitive generics to be a significant goal.
Worrying about compile time is something compiler developers should do, but given the option between faster build times for a few developers and faster run times for huge numbers of end users, the latter is the correct call.
I obviously can’t speak to what your projects are, but I would consider repeatedly copy/pasting essentially the same code to be a problem.
I don’t like Go, but I also didn’t complain about the language. I complained about an implementation of generics with questionable performance tradeoffs, from a group that has historically vigorously argued against generics.
Do you have a source for that claim? All I remember is a very early statement that has also been on the website.
Looked it up. It has been there since at least 2010 [1].
[1] https://web.archive.org/web/20101123010932/http://golang.org/doc/go_faq.html
You’re right in that I can’t point to a single quote.
I can however point at the last decade of Go devs talking about supporting generics, which has pretty consistently taken the path of “generics make the language harder” (despite them being present in a bunch of builtin types anyway?), “generics make the code bigger”, “generics make compilation slower”. Your above quote even says that they recognize that it’s developers outside the core Go team that see the benefit of generics.
This is not me beating up on Go, nor is it me questioning the competence of the core team. This is me saying that in light of their historical reticence to support generics, this comes across as an intentionally minimal viable implementation created primarily to appease the horde of devs who want the feature, rather than to provide a good level of performance.
Ian Lance Taylor, one of the earliest and most prolific Go contributors, had been advocating, or at least suggesting, generics for quite some time. He was exploring design ideas since at least 2010, with a more serious design in 2013. I think this contradicts the “giving in and appeasing the masses” sentiment you’re projecting onto the Go team.
Come to think of it, he also wrote a very long rebuttal to a commenter on the golang-nuts mailing list who was essentially saying what you’re saying. I’ll see if I can find it. Edit: Here it is.
That sounds like a very unconvincing argument to me. In my opinion they did a very good job with the accepted generics proposal, because it keeps the language simple while also helping to avoid a lot of boilerplate, especially in libraries. Also, the Go team has pointed out several aspects of the generics implementation they want to improve in upcoming releases. Why would they do that if they had implemented generics only to please “the horde of devs that want the feature”?
I will say that compared to early generics proposals, the final design is quite a bit more Go-like. It’s unfortunate that the type constraints between the [] can get quite long, but if you ignore the type parameters, the functions look just like normal Go functions.
I don’t think that is true at all. They stated (1) that they did not like either end of the tradeoff: erasure (characterized by Java) causing slow programs, and full specialization (characterized by C++ templates) causing slow compile times; and (2) that some designs are needlessly complex.

They spent years refining the design - even collaborating with academia - to minimize added type system complexity and to choose a balanced performance/compile-time implementation tradeoff that met their objectives.
I couldn’t possibly disagree more.
I can’t think of any lower-severity problem affecting any of my projects.
Anyway, I won’t reply to any further messages in this thread.
I would phrase it as: everyone not working with Go was complaining about the lack of generics, and the Google marketing team assigned to Go (Steve Francia and Carmen Ando being the most prominent) is working hard to sell Go to the enterprise, so it was a priority to clear that bullet point up.

People working with Go generally just bit the bullet and used code generation if necessary, but mostly ignored the lack of generics.
Self-sabotage just for the sake of generics doesn’t make sense, because this release slowed down build times for everyone, not just people using generics: https://github.com/golang/go/issues/49569.
The TL;DR seems to be that Go’s generics treat all pointer-typed type parameters as being the same, so they don’t get monomorphised separately, resulting in needlessly slow code.

Slightly more detailed TL;DR: it monomorphizes by “shape” rather than by type, so it has to pass the vtable as a hidden parameter to the monomorphized functions, and that hidden parameter makes inlining hard for the Go compiler and results in extra pointer dereferences.
It does have some good news though: using a generic byteseq is just as fast as using raw string and []byte!
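To illustrate both points, a small hypothetical example (the names and the byteseq-style constraint below are mine, written against the Go 1.18 implementation as I understand it): all pointer type arguments share one “shape” instantiation with dictionary-based dispatch, while string and []byte are distinct shapes, so each gets its own copy with no dispatch at all.

```go
package main

import "fmt"

type A struct{ n int }
type B struct{ s string }

func (a *A) String() string { return fmt.Sprintf("A(%d)", a.n) }
func (b *B) String() string { return "B(" + b.s + ")" }

// As of Go 1.18, all pointer type arguments share a single shape
// instantiation, so Describe[*A] and Describe[*B] use the same
// generated code and the String call goes through a hidden runtime
// dictionary instead of being devirtualized.
func Describe[T fmt.Stringer](v T) string {
	return v.String()
}

// A byteseq-style constraint: string and []byte are different shapes,
// so each gets its own instantiation, and no method is ever
// dispatched, which is why this case benchmarks like hand-written code.
type byteseq interface{ ~string | ~[]byte }

func index[T byteseq](haystack T, needle byte) int {
	for i := 0; i < len(haystack); i++ {
		if haystack[i] == needle {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(Describe(&A{n: 1}), Describe(&B{s: "x"}))
	fmt.Println(index("hello", 'l'), index([]byte("hello"), 'l'))
}
```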
But less code. It’s a tradeoff, not a mistake.
Indeed, and one that the Go team worked very hard to calibrate to the right level for their context. Not every language should make the same choice Rust and C++ make of fully monomorphizing generics (in both cases with optional mechanisms [virtual in C++ and trait objects in Rust] to escape).

+1
Canonical post on the topic: https://research.swtch.com/generic
There are many ways to reduce the amount of generated code, but there are very few cases where people would choose absurdly slow code over smaller code size. Honestly, this still just feels like the ongoing saga of the Go team hating generic programming and passive-aggressively making design choices for the express purpose of penalizing such code.
What if you’re wrong about their motives?
I said “feels like”
What I do know is that the core Go team has spent years arguing against generics, and now that they’ve finally added support, the implementation’s performance characteristics seem significantly worse than those of pretty much every other implementation of generics outside of Java.
You already admitted that you have no evidence to claim that ‘the core Go team has spent years arguing against generics’. You have no credibility to argue about Go.
I’ve been working on a Go->C++ transpiler – https://github.com/nikki93/gx – that also converts Go generics to C++ templates. The code is quite small because it uses Go’s standard library parser and typechecker. I should write a readme and post about it here some time. Currently it’s just scoped to my own personal projects and use, and it covers quite a specific subset of Go w/o GC etc. For the generics part, the nice thing is that you get Go’s definition-checking and simplicity but also C++’s monomorphization.