1. 56
  1. 12

    i’ve been really digging lwn lately

    1. 6

      Same. Just subscribed!

      1. 6

        oh you know I just realized all the posts I’ve loved recently have been @benhoyt. Good job, Ben!

        1. 14

          You’re welcome. Do subscribe – that’s how the (very small) team makes their living and how the site keeps going.

    2. 10

      This generics dust-up kinda killed my interest in Go. I was writing a fairly large project in Go, and coming from C# I was really missing generics. I used a code generator to build the data structures I cared about to work around the lack of generics.

      Go has some nice features (goroutines, low memory consumption), but it just seems to lack ergonomics. I haven’t really paid much attention in recent years, as .NET Core took away one of Go’s large pluses for me (cross-platform), and the speed differences seem to have evened out.

      I just can’t believe it’s taking “the smartest people in the world” (and I mean that sincerely; some very smart people that I have huge respect for work on this language) this long to figure it out.

      That’s why the language architect supreme is Anders Hejlsberg.

      I used Turbo Pascal in college and have used C# and TypeScript extensively.

      Anders seems to be very pragmatic about problem solving and actually shipping language features users want vs hemming and hawing for years and years.

      1. 4

        Anders seems to be very pragmatic about problem solving and actually shipping language features users want vs hemming and hawing for years and years.

        It’s all about trade-offs. For example, let’s imagine adding generics would slow the compiler by a factor of 2; would that be a good trade-off? What if it’s 10 times slower? 100 times?

        Of course, generics are not going to make the compiler 100 times slower, but at which point do you decide it’s worth the trade-off? Things like compile speed and fairly simple semantics are also features people want. In general, adding any feature always involves trade-offs in many areas. One of the nice (and not frequently enough stated, IMO) advantages of Go right now is that it’s fairly easy to write tooling for the language, as the syntax is fairly simple.

        Generics in Java, C++, and TypeScript are Turing-complete. C# avoids this by doing much of the work at runtime, but that comes with its own set of drawbacks.

      2. 7

        rsc’s post on the “three approaches” seems like it ignored a lot of prior art; C# is a mainstream language that has had actual reified generics for quite a while, to say nothing of what the FP community has.

        1. 9

          The post you are referring to directly asks for help finding prior art. Also, do C#-style reified generics even make sense for Go? They require a lot of runtime support.

          https://research.swtch.com/generic

          1. 7

            But at least they decided to work with Phil Wadler in the end.

          2. 4

            Because Go doesn’t support operator overloading or define operators in terms of methods, there’s no way to use interface constraints to specify that a type must support the < operator (as an example). In the proposal, this is done using a new feature called “type lists”, an example of which is shown below:

            // Ordered is a type constraint that matches any ordered type.
            // An ordered type is one that supports the <, <=, >, and >= operators.
            type Ordered interface {
                type int, int8, int16, int32, int64,
                    uint, uint8, uint16, uint32, uint64, uintptr,
                    float32, float64,
                    string
            }
            

            In practice, a constraints package would probably be added to the standard library which pre-defined common constraints like Ordered. Type lists allow developers to write generic functions that use built-in operators:

            // Smallest returns the smallest element in a slice of "Ordered" values.
            func Smallest(type T Ordered)(s []T) T {
                r := s[0]
                for _, v := range s[1:] {
                    if v < r { // works due to the "Ordered" constraint
                        r = v
                    }
                }
                return r
            }
            

            Hmm, to me this looks like the Smallest function would only accept the built-in types with a less-than operator <. Don’t you lose a whole lot of usefulness if you’re not allowed to write your own Ordered type which can be used with a Smallest function?

            1. 3

              Yes.

              People will be forced to decide whether their ordering-based data structures (TreeMaps, RB-Trees, …) will use Ordered and therefore only work on a small set of “blessed” types, or whether they define their own, which … well … just imagine how great it will be to have two dozen replacement types for Ordered-for-my-own-types.

              (All modulo embedding, as far as I remember.)

              1. 1

                This is not true. Using the type-list syntax with builtin types allows operator usage on all types (both builtin and user-defined) where either the builtin type itself appears in the constraint’s type list, or the type is a custom type whose underlying type appears there.

                https://go2goplay.golang.org/p/59oHxvIEp0A

                This doesn’t work for struct types, but Go never had operator overloading in the first place, so no struct types would implement these operators anyway.
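
                For example (a sketch in the draft syntax, reusing the Ordered constraint and Smallest function from the examples above):

                // MyInt's underlying type is int, which appears in Ordered's type list,
                // so MyInt satisfies Ordered and < works on it inside Smallest.
                type MyInt int

                var _ = Smallest(int)([]int{3, 1, 2})     // built-in type: fine
                var _ = Smallest(MyInt)([]MyInt{3, 1, 2}) // defined type with a built-in underlying type: also fine
                // A struct type has no entry in the type list, so it can never satisfy Ordered.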

                1. 7

                  Yeah, but soc’s point stands: if you’re building an ordering-based function like Smallest() or a data structure like a binary tree, you’ll either:

                  1. Choose the much more limiting Ordered constraint and use <, but then your function or tree will only be able to use built-in ordered types (or types with an underlying type that’s a built-in ordered type). That’ll work great for Tree(int) or Tree(MyInt), but won’t work for Tree(MyStruct).
                  2. Or you’ll have the constraint be an interface with a Less method, and you won’t be able to use the built-in types like Tree(int) at all … you’d be required to define a MyInt type that implements Less, and convert everything to MyInt before putting it into the tree.

                  Option #2 is definitely more flexible, as it at least allows any type to go into the tree, but with the conversions it’ll be more boilerplate than you want for simple types.

                  The latest generics draft handles this by passing in a “compare function”, for example see Map in the Containers section. That kind of skirts the constraint/interface issue … but maybe that’s okay?

                  Here’s a Go2Go Playground example that shows what I’m talking about: https://go2goplay.golang.org/p/Rbs374BqPWw
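
                  For concreteness, here’s roughly what option #2 looks like in the draft syntax (the Lesser and IntWrapper names are mine, not from the draft):

                  // Option #2: constrain on a Less method instead of the Ordered type list.
                  type Lesser(type T) interface {
                      Less(other T) bool
                  }

                  func SmallestLesser(type T Lesser(T))(s []T) T {
                      r := s[0]
                      for _, v := range s[1:] {
                          if v.Less(r) { // method call instead of <
                              r = v
                          }
                      }
                      return r
                  }

                  // Built-in types have no Less method, so ints must be wrapped:
                  type IntWrapper int

                  func (a IntWrapper) Less(b IntWrapper) bool { return a < b }

                  Which is exactly the wrapping and conversion boilerplate described in option #2.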

                  1. 1

                    This is not true. Using the type-list syntax with builtin types allows operator usage on all types (both builtin and user-defined) where either the builtin type itself appears in the constraint’s type list, or the type is a custom type whose underlying type appears there.

                    Exactly as I said:

                    (All modulo embedding, as far as I remember.)

                    1. 1

                      I see. I misunderstood what you meant there, because that is not the correct Go terminology. Embedding is only possible when defining struct and interface types. In a type newtype oldtype definition, oldtype is referred to as the “underlying type”; newtype doesn’t inherit any of oldtype’s methods, but it does gain its operators.

              2. 4

                It will be interesting to see whether they stick with (type T) – I’d assume so, but this further decreases Go’s readability.

                I think the alternatives don’t stand much of a chance here:

                • <> – Bad choice, also NIH.
                • [] – But but fAmiLiArItY, also hard to pull off given the existing misuse of [] in Go.

                Nice to see, though, that Phil Wadler could talk some sense into them.


                Here are my cliff notes from the last time I read the current design proposal:

                Although methods of a generic type may use the type’s parameters, methods may not themselves have additional type parameters. Where it would be useful to add type arguments to a method, people will have to write a suitably parameterized top-level function.

                This is not a fundamental restriction but it complicates the language specification and the implementation.

                Preferring the convenience of a dozen compiler writers over solving users’ pains sounds like a bad idea.

                Especially considering that if it were added later, it would be a breaking change for every user who wants to clean up his code and make use of this feature.
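
                A sketch of what this means in practice (draft syntax; List and Transform are my own names, not from the proposal):

                type List(type T) struct {
                    items []T
                }

                // Not allowed: a method cannot introduce its own type parameter U.
                //
                //     func (l *List(T)) Transform(type U)(f func(T) U) *List(U)
                //
                // The sanctioned workaround: a suitably parameterized top-level function.
                func Transform(type T, U)(l *List(T), f func(T) U) *List(U) {
                    out := &List(U){}
                    for _, v := range l.items {
                        out.items = append(out.items, f(v))
                    }
                    return out
                }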

                we introduce a new predeclared type constraint: comparable

                Not sure why this one is written in lower case. (Skipping comment on poor naming.)

                It’s also kinda inconsistent with the approach to comparisons earlier on, which don’t get their own special builtin constraint.

                Both ==/!= and the </<=/… operators earlier could have been handled with less special-casing by defining a suitable constraint for each with the corresponding methods, and – if desired – restricting the set of types that are allowed to satisfy them.
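
                Roughly what that could look like (hypothetical sketch, not part of the draft):

                // Hypothetical method-based constraints instead of the builtin special cases.
                type Equaler(type T) interface {
                    Equal(other T) bool
                }

                type Lesser(type T) interface {
                    Less(other T) bool
                }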

                The rule is that if a type constraint has a single type parameter, and it is used in a function’s type parameter list without an explicit type argument, then the type argument is the type parameter being constrained.

                Unnecessary.
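
                For reference, the shorthand this rule enables, using the hypothetical Lesser constraint above:

                // Because Lesser has a single type parameter, the type argument may be omitted:
                //
                //     func Smallest(type T Lesser)(s []T) T
                //
                // which is read as
                //
                //     func Smallest(type T Lesser(T))(s []T) T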

                Therefore, we propose that the language change so that func(x(T)) now means a single parameter of type x(T). This will potentially break some existing programs, but the fix will be to simply run gofmt.

                Good.

                Values of type parameters are not boxed

                Good.

                It’s impossible for non-generic code to refer to generic code without instantiating it, so there is no reflection information for uninstantiated generic types or functions.

                Good.

                No covariance or contravariance of function parameters.

                Will be interesting to see how this works out in practice.

                No operator methods. You can write a generic container that is compile-time type-safe, but you can only access it with ordinary methods, not with syntax like c[k].

                Makes sense – this kind of syntax sugar has been a mistake in most languages.
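
                For illustration (draft syntax; the Set container is my own example), the container is fully type-safe, but every access goes through ordinary methods rather than c[k]:

                type Set(type T comparable) struct {
                    m map[T]struct{}
                }

                func NewSet(type T comparable)() *Set(T) {
                    return &Set(T){m: make(map[T]struct{})}
                }

                func (s *Set(T)) Add(v T) { s.m[v] = struct{}{} }

                func (s *Set(T)) Contains(v T) bool {
                    _, ok := s.m[v] // indexing the internal map is fine; s[v] itself is not possible
                    return ok
                }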

                And, of course, there is no way to write a constraint to support either return nil or return 0.

                One could use a Zero constraint that is automatically implemented (and cannot be implemented manually) for all types.
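
                A sketch of the problem behind that quote (draft syntax): there is no single expression that means “the zero value” for every T, so neither return nil nor return 0 compiles for an arbitrary type parameter. The usual workaround is a var declaration; an automatically implemented Zero constraint as suggested above would make the intent explicit.

                func FirstOrZero(type T)(s []T) T {
                    if len(s) == 0 {
                        var zero T // T's zero value; `return nil` or `return 0` would not compile here
                        return zero
                    }
                    return s[0]
                }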

                Lots of Irritating Silly Parentheses

                Agreed. Go is already looking … less-than-structured with its ident Type syntax, and having () do double duty for types only makes it worse. If the language hadn’t added special builtin [] operators, they could have gone with [] for generics, which is generally the optimal design in bracket-based languages.
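
                To illustrate, reusing Smallest from the example earlier in the thread:

                func example() {
                    xs := []int{3, 1, 2}
                    _ = Smallest(int)(xs) // explicit instantiation: (type arguments)(value arguments)
                    // With brackets this would read Smallest[int](xs), keeping type and
                    // value arguments visually distinct.
                }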

                The design has no way to express convertability between two different type parameters.

                Sounds like a job for a function from T1 to T2.
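
                That is, something along these lines (sketch; the names are mine):

                // Convertibility supplied explicitly as a function value from T1 to T2.
                func Convert(type T1, T2)(s []T1, conv func(T1) T2) []T2 {
                    out := make([]T2, 0, len(s))
                    for _, v := range s {
                        out = append(out, conv(v))
                    }
                    return out
                }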

                We would need an untyped boolean type for operations such as ==(T) untyped bool.

                This is unclear to me.

                … untyped constants …

                Seems like they have caused nothing but trouble – was probably a mistake to add them in the first place.

                Map keys must be comparable, so key has the predeclared constraint comparable.

                This doesn’t seem to resolve the issues around floating point numbers, but they existed like this before. Could have been an opportunity to do better.
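
                Presumably the floating-point issue meant here is NaN as a map key; a minimal (ordinary, non-generic) Go example:

                package main

                import (
                    "fmt"
                    "math"
                )

                func main() {
                    m := map[float64]string{}
                    m[math.NaN()] = "stuck"            // NaN != NaN, so this entry can never be looked up or deleted by key
                    fmt.Println(m[math.NaN()], len(m)) // the lookup misses, yet len(m) is still 1
                }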


                Thanks for reading, and yes, it’s normal that the comment gets flagged almost immediately.

                1. 2

                  It will be interesting to see whether they stick with (type T) – I’d assume so, but this further decreases Go’s readability.

                  I think the alternatives don’t stand much of a chance here:

                  • <> – Bad choice, also NIH.
                  • [] – But but fAmiLiArItY, also hard to pull off given the existing misuse of [] in Go.

                  Are you arguing only about the syntax or about some semantic issues as well? If just syntax, then why in your view is [] better than <>?

                  1. 5

                    In general, using <> is really hard to lex. What stream of tokens do you generate for Foo<Bar<10>>? Until C++11, C++ lexers would generate the stream Foo, ‘<’, Bar, ‘<’, 10, ‘>>’. The last >> was parsed as one right shift token rather than two closing ’>’s.

                    You can either make the language really awkward to write by requiring a space between the ’>’s, like C++ did before C++11, or you can build a super complex system to feed parse information back to the lexer so it lexes “>>” as one or two tokens depending on context. Neither is an awesome solution.
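
                    The standard illustration of the ambiguity (hypothetical syntax, since Go never adopted angle brackets):

                    a, b = w < x, y > (z)

                    Without type information there is no way to tell whether the right-hand side is a pair of comparisons (w < x and y > (z)) or a call of the generic function w instantiated with x and y, so the parser would need exactly the kind of feedback loop described above.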

                    1. 2

                      why in your view is [] better than <>?

                      1. <> is hard to read for humans
                      2. <> is hard to parse for compilers
                      3. It allows [] to be (ab)used for syntax “conveniences”

                      I wrote about it here.

                      1. 1

                        So, given a readable font, and a language where [] is used for arrays and/or can be overloaded for something else (like in Rust), there wouldn’t be a difference between [] and <>?

                        1. 4

                          If only proper angle brackets such as ⟨ ⟩ were usable (if they were easily typed, had universal font coverage, possibly were in ascii), I feel the programming world would be a much better place. Trying to imbue three different types of brackets (()[]{}) with many more than three meanings, often based on context, just makes things more difficult for everybody, language writers and users included.

                          So, given a readable font, and a language where [] is used for arrays and/or can be overloaded for something else (like in Rust), there wouldn’t be a difference between [] and <>?

                          I think the big difference is that even when abused for multiple different purposes, [ & ] still tend to come in matched pairs, which makes parsing them simple. < & > are comparison operators, which don’t usually appear in matched pairs, so if you have something like this < x > y > z, it’s much, much harder for the reader, compiler, IDE, or vim plugin to tell whether this should be understood as ⟨x⟩ y > z or ⟨x > y⟩ z or whatever. (That example is meaningless, but that’s part of the point: you can do a lot of sensible stuff with matched pairs of brackets without having the slightest idea what they mean.)

                          1. 2

                            You still have the issue of bitshifts (<<, >>) and binary comparison operators (<, >, …).

                            You are still using something that is not a bracket – both as not-a-bracket and as a bracket. That’s not good.

                            Using () brackets for terms, [] brackets for types, and not using <> as brackets but as the thing people have learned to read them as since kindergarten: that’s good.