1. 22
    1. 18

      The author is confusing type definitions and type aliases.

      The type Color int syntax in Go does not make Color an alias of int. It defines Color as a new type with int as the underlying type. The syntax for aliasing is type Color = int. (In Haskell terms, Go’s type Color int is roughly newtype Color = Color Int, while Go’s type Color = int is type Color = Int.)

      This distinction is important: you can define methods on Color precisely because it’s a new type; if it were an alias of int, that wouldn’t be allowed. In fact, this is exactly the same mechanism as type Foo struct { ... }, which defines Foo as a new type with an anonymous struct as its underlying type.

      The fact that you can define type Color string and then assign a string literal to a Color-typed variable is not because Color is an alias of string, but because string literals are untyped and can be assigned to any type whose underlying type is string (written as ~string in a constraint). You can’t do this:

      var s string = "foo"
      var c Color = s // compiler error
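
      A minimal sketch of the difference, sticking with string as the underlying type (the names are invented for illustration):

      type Color string // type definition: a new type whose underlying type is string

      // Methods are allowed, because Color is its own type.
      func (c Color) Hex() string { return string(c) }

      type ColorAlias = string // type alias: just another name for string

      // func (c ColorAlias) Hex() string { return c }
      // compile error: cannot define new methods on non-local type string

      var c1 Color = "red" // OK: "red" is an untyped string constant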
      
      1. 10

        To get back to the original point - I’ve always suspected that the omission of enums (and, to some extent, sum types) has to do with Go’s (original?) focus on developing networked services. In a networked service you have to assume that every enum is open, and closed enums are generally an anti-pattern.

        It’s a bit of an extreme position to force on the whole language, though; closed enums that don’t cross network boundaries are useful and convenient.

        1. 8

          What I’ve always suspected is that, like everything else in Go, it started from C. C’s enums are useless, so they were removed in Go, which I think was a good idea (better not to have enums at all than to have garbage ones). One spanner in the works is that Limbo had sum types via pick (I think they were pretty shit tho, at least in terms of syntax / UX).

          I don’t believe closed enums to be an anti-pattern in networked services. They’re a great thing when generating output, as they ensure you can’t generate nonsense content; on intake you validate them at the boundary, just as you have to validate an “open enum” anyway, because the software does not handle every possible value of the underlying type and you don’t want to end up in nonsense land.

          1. 2

            I speculate the motivation is exactly that Go doesn’t want to get in the business of doing validations - if something physically fits in a uint32, it can be put in a type with uint32 as the underlying type with an explicit conversion. It’s up to the programmer to decide how to do the validation by writing code.

            This makes a lot of sense in networked services. If a protocol defines a flag that is supposed to have one of 6 values, and the server now sees a 7th value it doesn’t know, failing isn’t always the correct thing to do. You may want to just fall back to some default behavior, log it, and so on.
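
            As a rough sketch of that style (the names here are hypothetical), the conversion always succeeds, and the validation and fallback are ordinary code at the boundary:

            import "log"

            type Flag uint32

            const (
                FlagA Flag = iota
                FlagB
                FlagC
            )

            // parseFlag converts a wire value; an unknown 7th value is handled
            // explicitly in code rather than rejected by the language.
            func parseFlag(raw uint32) Flag {
                f := Flag(raw) // physically fits in uint32, so the conversion is allowed
                switch f {
                case FlagA, FlagB, FlagC:
                    return f
                default:
                    log.Printf("unknown flag value %d, using FlagA", raw)
                    return FlagA // fall back to default behavior instead of failing
                }
            }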

            Sure, you can do that if the language provides validation for you, but then the language also needs to provide facilities for handling failed validations. And that sounds like something the designers of Go didn’t want in the language. Go would rather give you low-level tools than leaky high-level ones - it’s the same mentality that has led to the error handling situation.

            Exhaustive switches on enums are also a double-edged sword across API boundaries. Once you expose a closed enum as part of your API, there will be consumers who try to do exhaustive switches on them. This means that adding a new possible value is a breaking change, so all closed enums get frozen the moment you publish your API.

            Again, there is still a niche for closed enums - when it’s internal to a package, or when it’s part of a protocol or API that will literally never change - and I feel the Go designers probably underestimated the size of that niche. I’m just speculating on why they decided not to have them after all; it probably came from a mentality that focuses a lot on network and API boundaries.

            1. 1

              > This makes a lot of sense in networked services. If a protocol defines a flag that is supposed to have one of 6 values, and the server now sees a 7th value it doesn’t know, failing isn’t always the correct thing to do. You may want to just fall back to some default behavior, log it, and so on.

              > Sure, you can do that if the language provides validation for you, but then the language also needs to provide facilities for handling failed validations.

              What sort of weird-ass scenario did you cook up here? There isn’t any of that, or any need for the language to provide validation. You do whatever validation you want when you convert from the protocol to the internal types, exactly as you handle invalid structs for instance, or just out of range values for whatever types.

              > Go would rather give you low-level tools than leaky high-level ones

              That is an assertion with no basis in reality.

              > it’s the same mentality that has led to the error handling situation.

              Love for being awful?

              > Exhaustive switches on enums are also a double-edged sword across API boundaries. Once you expose a closed enum as part of your API, there will be consumers who try to do exhaustive switches on them. This means that adding a new possible value is a breaking change, so all closed enums get frozen the moment you publish your API.

              Which is what you want 99 times out of 100: most of the time the set of possible values is fixed (in the same way nobody’s extending the integers), and for most of the rest you want adding a value to be a breaking change, in the same way switching from 1 byte to 2, or signed to unsigned, or unsigned to signed, is a breaking change.

              In the few cases where you foresee new values being routinely added, without that being a breaking change, don’t use a sum type. Or use one anyway, if the language supports non-exhaustive sum types.

      2. 7

        Grug brain programmer not so smart, as it says on the tin…

      3. 4

        While you’re correct, the distinction doesn’t really matter to the complaint; even if the parameter is typed, it’s just a conversion away:

        var s string = "foo"
        var c Color = Color(s) // now compiles
        

        so the point remains that:

        1. if the type is exported, any caller can subvert your carefully enumerated set of values
        2. because there’s no actual safety, the language can’t rightly check for exhaustive switches, so you’re left with a default and an assertion on every switch (see the sketch below)
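
        For example (a sketch, assuming the article’s Red/Green/Blue constants and the standard fmt package), every switch over the enum needs that guard:

        func describe(c Color) string {
            switch c {
            case Red:
                return "red"
            case Green:
                return "green"
            case Blue:
                return "blue"
            default:
                // the compiler offers no exhaustiveness check, so assert by hand
                panic(fmt.Sprintf("unexpected Color: %v", c))
            }
        }
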
        1. 4

          It is relevant because even though it doesn’t prevent deliberate conversions, it prevents accidental uses.

          For better or for worse, Go programmers don’t tend to demand absolute guarantees from the language when it comes to the type system. This is very different from the communities of other statically typed languages, probably because a lot of Go programmers have a background in dynamically typed languages like Python.

    2. 7

      I can’t agree more with the author.

      Screw generics, give me sum types and exhaustive switch statements.

      I want tools that help me avoid stupid things. Go enums fail miserably at that.

    3. 5

      Sadly, Go made the decision to require that every type have a default value. I assume this makes it difficult to introduce variant types without arbitrarily picking some default. I see why, as C programmers, the designers thought default values were a good idea, but in hindsight it seems really unfortunate not to forbid uninitialized reads using static checking instead. :/
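
      For instance (assuming the article’s Color constants, where Red Color = iota), the zero value silently picks a variant for you:

      var c Color  // never assigned
      _ = c == Red // true: the zero value already means Red (0)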

    4. 5

      Spoiler Alert:

      Not a hard question. It’s sum types! (Or enums, tagged unions, or whatever you want to call them).

      Eh, yeah, I feel that Go’s enums (I’m not even sure I would want to call them enums) are poor; there is a lot of boilerplate needed to do string conversions, which is pretty difficult to escape if you need to return JSON that has a string value instead of an int.

      Though one thing that I disagree with is this statement referring to iota.

      > but now I need to count on my fingers and toes to figure out the actual value of any of these constants

      I feel that you really should not be relying on the underlying integer value of the enum. Taking the below example from the article, if you have three colours defined, you should not try to refer to Blue as 2; instead, use Blue.

      const (
      	Red Color = iota
      	Green
      	Blue
      )
      

      I recently had to take an existing 3rd-party list of items that was zero-indexed and use it in my code. Well, I didn’t want the empty or zero value of the parameter to be the first item in the list – that could lead to unexpected data integrity issues. So I defined the first member of the enum as Unspecified, then used a map to convert between the enum value and the “real” value used by the 3rd party. Another reason for this effort was that the 3rd party didn’t have a continuous sequence of integers…
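
      Presumably something like this sketch (the names and IDs here are invented; the real ones came from the 3rd party):

      type Item int

      const (
          ItemUnspecified Item = iota // zero value explicitly means "not set"
          ItemFoo
          ItemBar
      )

      // thirdPartyID maps internal values to the vendor's zero-indexed,
      // non-contiguous IDs, keeping the zero value out of real data.
      var thirdPartyID = map[Item]int{
          ItemFoo: 0,
          ItemBar: 4,
      }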

      1. 3

        Yes, iota is supposed to be used as an implementation detail. It should never be used for something that has to go over the wire.

        1. [Comment removed by author]

      2. 2

        > I feel that you really should not be relying on the underlying integer value of the enum.

        I think the complaint about having to count to find the value is mostly relevant to the following bit: “there isn’t even support to quickly marshal these integers into strings (e.g. for debugging)” - if you print out something that includes Blue, you’ll get (say) 2 in your log output and then you have to start counting (or implement a converter, which is a faff once you have more than one enum or they change frequently).
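
        The usual workaround is a hand-written String method, or generating one with the stringer tool from golang.org/x/tools; a sketch, assuming the article’s Color type and the fmt package:

        func (c Color) String() string {
            switch c {
            case Red:
                return "Red"
            case Green:
                return "Green"
            case Blue:
                return "Blue"
            default:
                return fmt.Sprintf("Color(%d)", int(c))
            }
        }

        // or generate the above with: //go:generate stringer -type=Color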

    5. 4

      > They have beautiful beautiful sum types. They’re called unions in TypeScript, but they’re the same thing.

      They’re not quite the same thing, because union types don’t naturally handle members with the same payload (e.g. int | int is just int); you need wrapper types to differentiate the members.

      Similarly, you need a separate feature for payload-less variants, or additional capabilities (e.g. lifting literals to types), or you have to define more bespoke types as stand-ins (unit types in this case).

      Because sum types are tagged separately from their payload, all these cases Just Work and mix seamlessly; e.g. there’s nothing especially wrong with a Maybe (Maybe (Either Int Int)) (though odds are that would just exist at the program boundary and you’d have a flat enum internally).

    6. [Comment removed by author]