I tried to use the Stringer example from the design document, but it doesn’t compile. Anyone know what I’m doing wrong here? Maybe the type checker can’t handle this use case yet?
Your String method on Thing1 is on the pointer type, not the struct type. If you change the method to be on the struct type, it works (https://go2goplay.golang.org/p/OjhU9SkjDyn). Conversely, you could keep your example and call it on a pointer to the struct (https://go2goplay.golang.org/p/pH1cMomGY9t). The other option is to declare the object as a pointer (but keep the method on the struct), and then x.String() will call the value-receiver method (https://go2goplay.golang.org/p/_rpA79Ggdrn).
This has to do with value vs pointer receivers. If you change the x to &Thing1{...} it works. This is actually the same with regular methods/interfaces in Go already – when you convert the value to an interface it has to explicitly implement the exact (pointer- or value-receiver) method.
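To make the method-set rule concrete, here is a minimal sketch (not the playground code; it uses the square-bracket syntax that eventually shipped, whereas the draft used parentheses):

package main

import "fmt"

type Thing1 struct{ name string }

// Pointer receiver: only *Thing1 has String in its method set.
func (t *Thing1) String() string { return t.name }

// Describe accepts any type whose method set includes String.
func Describe[T fmt.Stringer](v T) string { return v.String() }

func main() {
    // Describe(Thing1{"x"}) would not compile: Thing1 does not
    // implement fmt.Stringer, only *Thing1 does.
    fmt.Println(Describe(&Thing1{"x"}))
}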
Can anyone recommend some material describing concrete motivations for adding generics to Go? I’m aware of the abstract idea that you need generics in order to build data structures that work with different types, but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g? My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.
I hope it’s not that hard to imagine wanting different data structures than hash maps; maybe your problem fits better into a binary search tree for example.
Well, I for one usually don’t feel like implementing my own red-black tree, so I would like to just grab a library. That library will be much nicer to use if I can just make an RBTree<string, MyFoo>. I certainly wouldn’t want to copy/paste an int->string red-black tree into some file(s) and judiciously search/replace until I have a string->MyFoo tree (and then do the same when I need an int->float tree).
That makes sense, but I am still looking for some grounding in actual programming practice. Is there a use of a red-black tree that would not warrant customizing it for the storage type? Or one where it would make sense to add a library dependency rather than copying the RB tree code?
How do you write a library that provides a Red-Black tree that can in principle work with many different client types without generics? This isn’t a rhetorical question, I don’t know Go and I genuinely don’t know how you would implement this kind of library in Go without generics.
Go’s sync.Map (a concurrent hash map) is an actual real-world example of this, and it uses interface{}, akin to Java’s Object.
Right, that’s a great example. Because it uses interface{}, it:
Requires all keys and values to be heap allocated, leading to worse performance, worse memory usage, and worse memory fragmentation. Requiring two heap-allocated ints to store one value in an int->int concurrent hash map is unacceptable for many uses.
Is less ergonomic, requiring a cast every time you want to use a value.
Provides no type safety. (I imagine this one will be the least convincing to Go programmers, since Go generally expects the programmer to just not make mistakes)
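To ground those three points, a minimal sketch (mine, not from the thread) contrasting an interface{}-based container with a generic one, in the square-bracket syntax that later shipped:

package main

import "fmt"

// The interface{} approach: the container holds anything, but values
// get boxed and every read needs a type assertion that can fail at
// runtime.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }
func (s *Stack) Pop() interface{} {
    v := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return v
}

// The generic equivalent: no assertions at call sites.
type GStack[T any] struct{ items []T }

func (s *GStack[T]) Push(v T) { s.items = append(s.items, v) }
func (s *GStack[T]) Pop() T {
    v := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return v
}

func main() {
    var s Stack
    s.Push(42)
    n := s.Pop().(int) // cast required; the wrong assertion panics at runtime
    var g GStack[int]
    g.Push(42)
    m := g.Pop() // already an int; g.Push("x") would not compile
    fmt.Println(n, m)
}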
This brings me back to C++Builder 3 back in the 90s. To use a list, you had to create a class derived from a kind of TItem class to be able to store things. Why anyone would want to go back to that in productive code boggles my mind.
I’m using a sync.Map (for its concurrency support - I have many goroutines writing map entries, and another goroutine periodically ranging over the entire map to dump it to a json file).
However I know the types I write to the map, I have no need for interface{}.
Am I better off with a real typed map + using sync.RWMutex/sync.Mutex/etc. directly (in a custom struct)? Performance-wise.
I don’t know, you would have to benchmark or measure CPU or memory usage. The sync.Map documentation suggests that using a regular map + mutexes could be better though: https://golang.org/pkg/sync/#Map
The Map type is specialized. Most code should use a plain Go map instead, with separate locking or coordination, for better type safety and to make it easier to maintain other invariants along with the map content.
The Map type is optimized for two common use cases: (1) when the entry for a given key is only ever written once but read many times, as in caches that only grow, or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.
If your usage falls outside of the two use cases which sync.Map is optimized for, it would absolutely be worth looking into replacing your sync.Map with a regular map and a mutex.
I suppose it becomes a question of which has the biggest performance penalty for you, heap allocation + indirection with sync.Map or lock contention with regular map + mutex?
(Also, in most cases, this probably doesn’t matter; make sure you’re not spending a long time improving performance in a part of your code which isn’t actually a performance issue :p)
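For reference, a minimal sketch of the typed-map-plus-RWMutex alternative under discussion (names and types are illustrative, not from the thread):

package main

import (
    "encoding/json"
    "fmt"
    "sync"
)

// TypedMap guards a plain map with an RWMutex: writers take the write
// lock; the periodic JSON dump takes the read lock while it ranges.
type TypedMap struct {
    mu sync.RWMutex
    m  map[string]int // concrete types instead of interface{}
}

func NewTypedMap() *TypedMap { return &TypedMap{m: make(map[string]int)} }

func (t *TypedMap) Store(k string, v int) {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.m[k] = v
}

// DumpJSON marshals a snapshot of the map under the read lock. Writers
// block for the duration of the marshal; copying the map first would
// shorten that window.
func (t *TypedMap) DumpJSON() ([]byte, error) {
    t.mu.RLock()
    defer t.mu.RUnlock()
    return json.Marshal(t.m)
}

func main() {
    tm := NewTypedMap()
    tm.Store("requests", 1)
    b, err := tm.DumpJSON()
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // {"requests":1}
}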
Right - the code “just works(TM)” and it takes around 0.5 seconds to render the JSON file every minute (which I track with metrics just to be safe) so it should be fine to keep as is. This is just a for-fun conversation.
or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.
I definitely remember reading this sentence, and it made me choose sync.Map because it sounds like my use case. But like you say, if I don’t measure, it’ll be hard to tell.
I don’t know, and I didn’t think you could. I’m asking for an example use of an RB tree where using a library would make sense.
Here is a popular Go RB tree implementation, https://github.com/emirpasic/gods/, that uses interface{} for the key and value types. With generics, users of this library would get greater type safety. Just search github for uses of it:
https://github.com/search?q=%22github.com%2Femirpasic%2Fgods%2Ftrees%2Fredblacktree%22&type=Code
okay. except i don’t know how to search github for uses of it and your search link brings me to a login page :(
To short-circuit this:
At a previous job, I worked on a tool that started various services. The services had different dependencies, each of which needed to be started before the service. We wanted to be able to bring them up with as much parallelism as possible, or have a flag to launch them serially.
A simple approach to doing this correctly is modeling the dependencies as an acyclic graph (if it’s a cyclic graph, you’ve got a problem — you can never bring the services up, because they all depend on each other). To launch them in parallel, launch each one that has its dependencies met. To launch them serially, topologically sort the graph into an array/list/whatever and launch them one by one.
A generic graph implementation would be very useful, as would a topological sort that worked on generic graphs. With Go, you can’t have one that’s type-safe.
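As a sketch of how that could look with generics (illustrative names, modern syntax): a generic dependency graph with Kahn’s topological sort.

package main

import "fmt"

// Graph records, for each node, the nodes it depends on.
type Graph[T comparable] struct {
    edges map[T][]T
    nodes []T
}

func NewGraph[T comparable]() *Graph[T] {
    return &Graph[T]{edges: make(map[T][]T)}
}

func (g *Graph[T]) AddNode(n T) { g.nodes = append(g.nodes, n) }

// AddDep records that n depends on dep.
func (g *Graph[T]) AddDep(n, dep T) { g.edges[n] = append(g.edges[n], dep) }

// TopoSort returns a launch order (dependencies first) using Kahn's
// algorithm, or ok=false if the graph has a cycle.
func (g *Graph[T]) TopoSort() (order []T, ok bool) {
    indegree := make(map[T]int)
    dependents := make(map[T][]T)
    for _, n := range g.nodes {
        indegree[n] = len(g.edges[n])
        for _, dep := range g.edges[n] {
            dependents[dep] = append(dependents[dep], n)
        }
    }
    var queue []T
    for _, n := range g.nodes {
        if indegree[n] == 0 {
            queue = append(queue, n)
        }
    }
    for len(queue) > 0 {
        n := queue[0]
        queue = queue[1:]
        order = append(order, n)
        for _, m := range dependents[n] {
            indegree[m]--
            if indegree[m] == 0 {
                queue = append(queue, m)
            }
        }
    }
    return order, len(order) == len(g.nodes)
}

func main() {
    g := NewGraph[string]()
    for _, s := range []string{"db", "cache", "api"} {
        g.AddNode(s)
    }
    g.AddDep("api", "db")
    g.AddDep("api", "cache")
    fmt.Println(g.TopoSort()) // [db cache api] true
}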
Another great use case for graphs: visualizing dependency graphs! You can have an open source graph visualization library, build a graph of whatever it is you’re trying to visualize, and pass it to the library and get a nice visualization of the data.
Graph data structures can be quite useful. Supporting generics makes them type-safe, so you catch errors at compile time instead of runtime. Some other examples of the usefulness of graphs:
Graphs of friends at a social network (I currently work at one, and we use generic graph data structures all over the place — graphs of people to people, graphs connecting people and photos they’re tagged in, etc)
Network topology graphs
Graphs of links between documents
etc.
And it’s not just graphs. How do you write a type-safe function that takes in a list of possibly-null items, and returns a new list with the nulls stripped out? How about a function that takes a map and returns the list of its keys? In Golang, the answer is always copy-paste or give up type safety. In languages with generics, you can trivially write these functions yourself if they’re not in the standard library.
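For illustration, both functions are a few lines once generics exist (a sketch in the syntax that eventually shipped):

package main

import "fmt"

// Compact returns a new slice with the nil pointers stripped out.
func Compact[T any](in []*T) []*T {
    out := make([]*T, 0, len(in))
    for _, p := range in {
        if p != nil {
            out = append(out, p)
        }
    }
    return out
}

// Keys returns the keys of a map as a slice, in unspecified order.
func Keys[K comparable, V any](m map[K]V) []K {
    ks := make([]K, 0, len(m))
    for k := range m {
        ks = append(ks, k)
    }
    return ks
}

func main() {
    a, b := 1, 2
    fmt.Println(len(Compact([]*int{&a, nil, &b}))) // 2
    fmt.Println(Keys(map[string]int{"x": 1}))      // [x]
}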
thanks, this is a good motivating example.
Try this link:
https://sourcegraph.com/search?q=%22github.com/emirpasic/gods/trees/redblacktree%22&patternType=literal
Huh. It had not occurred to me that github search would require a login.
To turn the question around, why would you want to manually copy/paste code all over the place when the compiler can do it for you? And while I personally think “DRY” can be overdone, not having the same (or very similar) code copy/pasted all over the place seems like a big practical win.
As far as customizing specific data structure and type combinations, most languages with generics have a way to do that, and I’d bet the Go designers thought of it.
Copy/paste has its own problems, but it lets you avoid a ton of complexity in the toolchain.
Toolchain development is all about tradeoffs. For instance, I use Typescript; the reference implementation is featureful, but slow to boot, so it keeps a background process alive to cache the heavy lifting, which accumulates state and introduces subtle confusions (e.g. type errors that don’t exist) until it’s restarted.
For some problem spaces, the problems introduced by copy/paste pale in comparison to the problems introduced by slow, stateful compilers.
Copy/paste vs generics is unrelated to compiler bugginess.
If you carefully pick TypeScript as the comparison point, you can make the case that a buggy toolchain is bad (not that most users care, they just restart the compile process when it starts to go bad).
But if you were to pick, say, ReasonML for comparison, you could say that it’s possible to have a solid generics implementation (much less copy-pasting) and a super-fast, accurate compiler.
I.e. you can have both buggy and non-buggy compilers supporting generics. Hence, unrelated.
ReasonML is great!
That said, while the relationship is indirect, it’s there. Adding complexity is never free. It didn’t cost ReasonML speed or reliability, but it costs maintainers time and makes every other feature more difficult to add in an orthogonal way.
In the scheme of things, is it more important to have a super-simple compiler codebase, or is it more important to put more power and expressiveness in the hands of users? Note that every mainstream language that started without generics has now added them.
IMO, there’s such a thing as a right time to do it.
In the early years it’s more important to keep the simplicity - there aren’t that many users and you’re still figuring out what you want the language to be (not every feature is compatible with every approach to generics).
Once you’re ready to start generics you need to answer questions like - do you want monomorphisation or lookup tables? Is boxing an acceptable overhead for the improved debugging ergonomics?
It seems like Go has been going through exactly the process you’re describing.
I think these comparisons are a bit unfair: isn’t Typescript self-hosted, whereas ReasonML is written in OCaml? It seems like Typescript would have a very hard time competing.
Being able to use lots of existing OCaml bits is a huge advantage.
Typescript has been able to compete due to the sheer number of contributors - MS pays quite a large team to work on it (and related stuff like the excellent Language Server Protocol, VScode integration).
However, large teams tend to produce more complex software (IMO due to the added communications overhead - it becomes easier to add a new thing than find out what existing thing solves the same problem).
I should clarify my comment was more about comparing performance of the two languages.
OCaml is a well optimized language that targets native machine code so tooling built in OCaml should be more performant than tooling built in Typescript. As a result, it’s hard to compare the complexity of either tool by the performance of the tool. It’s apples and oranges.
isn’t Typescript self hosted, whereas ReasonML is written in OCaml? It seems like Typescript would have a very hard time competing.
That’s a strange argument. If it were very hard for them to compete why would they not use OCaml as well, especially since its contemporary alternative, Flow, was written in OCaml too? Or why would they not make TypeScript as good as a language for writing TypeScript in as OCaml is?
My comment was more about performance, but it wasn’t very clear. It’s hard for Typescript, which is compiled to Javascript and then interpreted/JITed, to create tooling that’s as fast as a language that builds optimized native code.
Given that Typescript is self hosted it has the advantage that community involvement is more seamless and I don’t want to downplay the power that brings.
But compilers that support generics are more likely to be buggy. That’s a relation.
Any source for this rather surprising assertion?
Generics are a feature that requires code to implement; code can contain bugs.
But a self-hosting compiler with generics is likely to be less verbose (because generics) than one without, so it should be less buggy.
I guess you can’t prove it either way, but IME the complexity of algorithms is more likely to cause bugs than verbosity.
I think Typescript is a straw man. Does this Go implementation of generics slow down the compiler a noticeable amount? There’s nothing inherent to generics that would make compiling them slow.
On the other hand, copy/pasted code is an ever increasing burden on developer and compile time.
You are imagining a code base where the same complex data structure is instantiated with two different types. Is that realistic?
Realistic enough that the Linux kernel developers went through the hassle of developing generic associative arrays, circular buffers, and other generic data structures using void*.
And so did Gnome with GLib, which provides generic lists, hash tables, and trees, along with several others structures, also using void*.
And the standard libraries of most modern languages include reusable and generic sequence and associative data types, and sometimes significantly more than that.
For most data structures, though, focusing on a single code base gives too narrow a view. Generics allow libraries of data structures to be created, so even though a single code base may only use one R* tree (or whatever), that R* tree library can be used as-is by any number of projects.
The Abstract and Background sections of the draft design doc touch on the motivations. Additionally, each section describing a dimension of the design usually mentions, at least briefly, the motivation for that feature.
Here is an example that I’ve wanted forever, and can finally do: higher-order combinators that let you leverage first-class functions!
Generic map, in go
That’s the type of thing I have seen as a justification, but I don’t get why that’s so important. Can’t you just use a for loop?
“Can’t you just …” goes forever. “Can’t you just write your for loop with labels and jumps in assembly?”^^
For me, it’s all about abstraction. Having low-level combinators like this that I can compose to build higher-level abstractions in a generic way is wonderful.
^^: See also whataboutism.
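The linked playground example would have used the draft’s parenthesized type parameters; in the square-bracket syntax that later shipped, such a Map looks roughly like this:

package main

import (
    "fmt"
    "strconv"
)

// Map applies f to every element of s and returns the results.
func Map[T, U any](s []T, f func(T) U) []U {
    out := make([]U, len(s))
    for i, v := range s {
        out[i] = f(v)
    }
    return out
}

func main() {
    ints := []int{1, 2, 3}
    strs := Map(ints, strconv.Itoa) // []string{"1", "2", "3"}
    fmt.Println(strs)
}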
I’m not sure that composing from higher level abstractions is always such a good idea. I like both Go (hobby projects) and Rust (work!) but I still feel that most of the time I prefer this level of abstraction:
type Server struct {
...
Handler Handler // handler to invoke, http.DefaultServeMux if nil
...
}
type Handler interface {
ServeHTTP(ResponseWriter, *Request)
}
Don’t get me wrong, I like type-level guarantees and I can see the flexibility here, but my experience with C++, Rust, and Haskell is that generic programming often ends up complicating things to a degree that I personally don’t like.
I think this is going to be a balance that the community has to find. I don’t regularly program in rust, but I’d be quite surprised if it wasn’t possible to get something close to the Go http API in it. The example you pasted seems complicated for the sake of being complicated. In theory, the Go community has been drilled into thinking in terms of the littlest abstraction that’ll work, which maybe makes it possible to generally avoid generic APIs that don’t actually need to be?
“Can’t you just” does not go forever. It is a simpler way to say that the alternative is not significantly harder than what’s proposed. Is there some type of task that would be doable using a generic map but unreasonably hard using for loops?
I feel like Go was designed from the ground up to be written in an imperative style, and composing first order functions is more of a functional style of coding. If I understand, without generics you would be nesting for loops rather than composing map functions, which is no more difficult to understand or write.
I don’t follow the connection to whataboutism.
I think it’s fine for your style of writing code to be loops and conditionals instead of map and filter; it’s a way of coding that makes more sense in an imperative language. Straight for loops and while loops with if statements inside them are, in my opinion, better and more easily understandable code in an imperative language than .map(...).filter(...).map(...) etc. Incidentally, there is a repo wherein Rob Pike expresses his attitude towards this style of coding: https://github.com/robpike/filter/
I mean… that’s like … one man’s opinion… man. See also.
Generics are going to create a divide in the Go community, and it’s going to be popcorn worthy. There’s no point of adding Generics to the language if this filter thing “shouldn’t be used,” and the community rejects the abstractions that Generics provide.
This divide is easily already seen in the community as it relates to test helpers. On the one hand, there’s a set of developers that say “stdlib testing is more than enough.” On the other hand, there are people who want the full testing facilities of junit, with matchers, lots of assert style helpers, etc. Who is right? They all are, because those things work for their respective teams and projects.
This general dogmatic approach to language idioms is why I call it “idiotmatic” Go.
I suppose if Ken and Rob wanted generics they would’ve put them in the original language, and there wouldn’t be this controversy. Time to go back to learning Erlang, which seems old and dusty enough to not have big language changes and drama.
You can’t pass a for loop to anything, you can only write it where you need it. Sure, toy examples look like toy examples, but the fact remains that Go has first-class functions, which should be a nice thing, but it doesn’t actually have a type system rich enough to express 90% of the things that make first-class functions worth having.
right, so the example code could be done with a for loop no problem. is there a more motivating example?
how do you mean?
Consider composing multiple transformations and filters together. With multiple for loops you have to iterate over the array each time, while by composing maps you only need to iterate once.
Just compose the operations inside the loop.
That works in some cases, but it’s pretty easy to find a counter-example, too.
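A sketch of that idea (mine, not the commenter’s code): compose the functions first, so a single Map pass applies every stage.

package main

import "fmt"

// Compose2 chains two functions into one, so a single pass over the
// data applies both transformations.
func Compose2[A, B, C any](f func(A) B, g func(B) C) func(A) C {
    return func(a A) C { return g(f(a)) }
}

func Map[T, U any](s []T, f func(T) U) []U {
    out := make([]U, len(s))
    for i, v := range s {
        out[i] = f(v)
    }
    return out
}

func main() {
    double := func(x int) int { return x * 2 }
    inc := func(x int) int { return x + 1 }
    // One iteration instead of Map(Map(s, double), inc):
    fmt.Println(Map([]int{1, 2, 3}, Compose2(double, inc))) // [3 5 7]
}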
In terms of collections, the truth is that most of the time a map/slice is a good option. Here are my top two favorite use cases for generics in go:
Oh man. I hadn’t even considered a better way to do error handling, e.g. a Result type. People are gonna get so mad.
Generics alone isn’t enough to do what people want to do with error handling 99.99% of the time, which is to return early. For that, you either need a macro, such as the aborted try proposal, or syntactic sugar for chaining such “functions that compose over them” (like Haskell’s do notation).
Otherwise you end up with callback hell à la JavaScript, and I think nobody wants that in Go.
I was more thinking of something where the if err pattern is enforced via the type system. You’re still not getting there 100%, but you could get reasonably close with a generic Result type that panics when the wrong thing is accessed, forcing you to always check or risk a panic.
r := thing()
if r.IsErr() {
    handleError(r.Err())
} else {
    handleSuccess(r.Val())
}
And of course it’s easy to question why this is interesting until you do chaining of things, and get a full on, type safe Result monad.
r := thing().andThen(func(i int) { ... }).andThen(func(i int) { ... })
if r.IsErr() {
handleErrForWholeComputation(r.Err())
} else {
handleSuccessForWholeComputation(r.Val())
}
The alternative can be seen in things like this where you skirt around the fact that you can’t generically accept a value in one of those called functions. This is also why I said people are going to get so mad. These things are confusing to people who haven’t dealt with them before, and will make Go much more expressive, but less easy to grok without effort.
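For the curious, a minimal sketch of such a Result type (my illustration; method names mirror the pseudocode above, and andThen becomes a free function because Go methods can’t introduce new type parameters):

package main

import "fmt"

// Result holds either a value or an error, and panics if the wrong
// side is accessed, forcing callers to check IsErr first.
type Result[T any] struct {
    val T
    err error
}

func Ok[T any](v T) Result[T]        { return Result[T]{val: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }
func (r Result[T]) IsErr() bool      { return r.err != nil }

func (r Result[T]) Err() error {
    if r.err == nil {
        panic("Err called on ok Result")
    }
    return r.err
}

func (r Result[T]) Val() T {
    if r.err != nil {
        panic("Val called on error Result")
    }
    return r.val
}

// AndThen runs f only if r is ok. It is a free function rather than a
// method because Go methods cannot add a new type parameter like U.
func AndThen[T, U any](r Result[T], f func(T) Result[U]) Result[U] {
    if r.IsErr() {
        return Err[U](r.Err())
    }
    return f(r.Val())
}

func main() {
    r := AndThen(Ok(2), func(i int) Result[string] {
        return Ok(fmt.Sprint(i * 2))
    })
    if r.IsErr() {
        fmt.Println("error:", r.Err())
    } else {
        fmt.Println("value:", r.Val()) // value: 4
    }
}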
but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g
All of them. Copying code manually is one of the worst things you can do in software development. If it weren’t, why even bother writing functions, ever?
My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.
The fact that libraries exist that don’t customise in such a way in languages with generics would disprove that notion.
For me that’s “making things unreadable for whoever comes after you”. And sometimes copying a bit of code is the optimal solution to avoid that.
but is there a real-world setting where this is actually better than just copying code and doing s/type1/type2/g
All of them. Copying code manually is one of the worst things you can do in software development. If it weren’t, why even bother writing functions, ever?
I disagree with your implication that the use of functions means code should never be copied. For example if you want to use strlcpy() in a portable C program, it makes more sense to put a copy in your source tree rather than relying on an external library. An extra dependency would add more headache than just copying the code.
My sense is that projects that use complex data structures almost always customize the data structures in a way that depends on the data type being stored.
The fact that libraries exist that don’t customise in such a way in languages with generics would disprove that notion.
That’s why I said “almost always.” And remember that the existence of a library doesn’t mean it is used with any frequency.
Suppose you have a structure parametrizable by types T1, T2. You’re writing it in Go, so you assume it’s OK if T1=string, T2=int. Also, in some places you were using int for a purpose unrelated to T2 (i.e. if T2=foo, there are still ints left in the source). Another programmer wants to copy-paste the code and change some types. How do they do it?
I think “this would make copy-pasting code harder” is not so compelling an argument. One of the major points of introducing generics is that it would eliminate much of the present need for copy/paste in Go.
Yes, I’m just counterpointing the parent commenter’s argument. I know the value of generic structures.
Yes, it would be harder than a search-and-replace, but that is still abstract and unrelated to any real-world use case.
TL;DR example: https://go2goplay.golang.org/p/4X5ZsEY-IbT
source: https://twitter.com/andybons/status/1272990192337649666?s=20
Oh, boy. Feels like folks are gonna get awfully creative with this… Hopefully code stays readable.
So, the Expression Problem thing from Featherweight Go is out? It seemed a nice addition to me, easy to grok and useful.
It’s the ability to require additional constraints in some methods of a generic type, such as:
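The example trailed off in the original; for illustration, here is the shape of the restriction and the usual workaround, a package-level function with its own constraint (a sketch in the syntax that later shipped):

package main

import (
    "fmt"
    "strings"
)

// A generic container whose type parameter has no constraints.
type List[T any] struct{ items []T }

// We'd like a String method available only when T is fmt.Stringer,
// but a method can't require more of T than the type itself does.
// The workaround is a free function with the extra constraint:
func ListString[T fmt.Stringer](l List[T]) string {
    parts := make([]string, len(l.items))
    for i, v := range l.items {
        parts[i] = v.String()
    }
    return "[" + strings.Join(parts, " ") + "]"
}

type ID int

func (id ID) String() string { return fmt.Sprintf("id-%d", id) }

func main() {
    l := List[ID]{items: []ID{1, 2}}
    fmt.Println(ListString(l)) // [id-1 id-2]
}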
It seems every time I look at go code, it becomes hard for my brain to figure out what part of a function declaration is doing what.
That’s just lack of familiarity. 2-3 days of looking at or writing it will fix this.
Agreed. Looks to be substantially reducing the usefulness of generics and will probably force adding “optional” constraints to the class instead.