  1. 6

    Sort of a shame that something like this won’t move anytime soon because Go is pretty big on language stability.

    I don’t care much about language changes trying to eke out a bit more performance or conciseness, but detecting mistakes is always awesome, and const and non-nillable references are a couple of things that seem to help a lot across different languages. It also seems natural to have them in the language itself as opposed to tools built on top, since they let you tell the compiler (and users of your code!) your intent in the type signature.

    (A third thing that falls in this category for me is enums: think interface{}s that are declared to be one of a given list of types. That’d also allow compile-time checking for an exhaustive match when you type-switch on them: you have to match all the types or have a default: case. There are already times when an interface{} is documented to be one of a list of types; might as well have the compiler check that, and have the documentation in the code. A sketch of today’s workaround follows.)
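
    To make the enum idea concrete, here is a minimal sketch of the common workaround today, with hypothetical names (Shape, Circle, Square, Area): a “sealed” interface whose unexported method keeps other packages from implementing it. The compiler still can’t check the type switch for exhaustiveness, which is exactly what a checked enum would add.

    ```go
    package shape

    import "math"

    // Shape is "sealed": only types in this package can satisfy it,
    // because isShape is unexported.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Square struct{ Side float64 }

    func (Circle) isShape() {}
    func (Square) isShape() {}

    func Area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return math.Pi * v.Radius * v.Radius
        case Square:
            return v.Side * v.Side
        default:
            // Required today; a checked enum could instead reject a
            // missing case at compile time.
            panic("unreachable")
        }
    }
    ```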

    As a minor thing, not all (result, error) pairs are natural fits for entangled optionals in Go; I/O calls can return both partial results and errors, for example (see the sketch below). That doesn’t change that they would be useful in many cases.
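
    For instance, io.Reader’s documented contract allows a single Read call to return n > 0 together with a non-nil error, so the partial result has to be consumed before the error is inspected; an optional that forced “either a value or an error” couldn’t express that directly. A sketch (drain and sink are hypothetical names):

    ```go
    package example

    import "io"

    // drain reads all of r, passing each chunk to sink. Per io.Reader's
    // contract, one Read may return n > 0 AND a non-nil error, so the
    // partial result is handled before the error is examined.
    func drain(r io.Reader, sink func([]byte)) error {
        buf := make([]byte, 4096)
        for {
            n, err := r.Read(buf)
            if n > 0 {
                sink(buf[:n]) // consume the partial read first
            }
            if err == io.EOF {
                return nil // clean end of stream
            }
            if err != nil {
                return err
            }
        }
    }
    ```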

    1. 1

      Sort of a shame that something like this won’t move anytime soon because Go is pretty big on language stability.

      I’m only slowly being dragged into Go, but I wonder if this is really the case. From my limited experience of reading Pike, rsc, and the other Go folks, it seems like they’re pragmatists first. Adding this feature in such a way that it lives beside the source, rather than inline with it, seems the most likely way to push it forward.

      Hopefully I’m right, because this would offset some pain for sure.

      1. 1

        Agree that a rewriter is the best way for someone outside the core team to prove an idea workable, but right now the core team seems much more interested in working on the implementation and tools than in integrating new things into the language. In theory you could add non-nillable pointers in a backwards-compatible fashion, but spec additions of any sort are not where most of the work is happening.

        A talk by Brad Fitzpatrick goes into some detail about this. Note that the “Go 2” section is just messing with you, as you find out on slide 32.

    2. 2

      Having written a fair amount of Go, this looks like a great idea to me. There are some subtleties around interface{} values with nil contents (see the sketch below), and I don’t love the syntax, but directionally, having more safety in Go would be really, really good.
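
      The best-known such subtlety is the typed-nil trap: an interface holding a nil pointer is itself non-nil. A quick sketch (myErr and fail are hypothetical names):

      ```go
      package main

      import "fmt"

      type myErr struct{}

      func (*myErr) Error() string { return "boom" }

      // fail returns a nil *myErr. Stored in the error interface, the
      // result carries the type/value pair (*myErr, nil), so the
      // interface itself is NOT nil.
      func fail() error {
          var e *myErr
          return e
      }

      func main() {
          err := fail()
          fmt.Println(err == nil) // prints false, to many people's surprise
      }
      ```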

      1. 1

        This continues to be referred to as a “billion dollar mistake”, but as far as I know that’s anecdotal at best. It seems to have been adopted as “truthy”. My own experience is that design by contract (i.e., Hoare logic), with clear transfer of ownership informally specified, is sufficiently cost-effective and necessary, no matter what features the language provides. (But I’ve nothing but a few decades of my own anecdotes to support my claims.)

        Eschewing nil moves the problem of three-valued logic elsewhere in the design. That’s well and good, and often needed even with this feature. For example, the need to account for “not yet available”, “missing data”, and “bad data”.

        I’d like to see these language-level kinds of efforts continue. But I’m satisfied the core team won’t rush to change unless there’s clear evidence of a big net gain. And I’m fairly certain that good design thinking, more or less independent of the language itself, will continue to be the most significant, cost-effective path to high quality.

        1. 3

          Eschewing nil moves the problem of three-valued logic elsewhere in the design. That’s well and good, and often needed even with this feature. For example, the need to account for “not yet available”, “missing data”, and “bad data”.

          The nice thing about removing nil as a value that inhabits all types (or reference types in this case) is that you can make separate types for each of these cases, which is pretty convenient; see the sketch below.
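
          As a hedged sketch (the names, and the use of generics, are mine, not the proposal’s), those three states could become explicit types that a type switch then handles one by one:

          ```go
          package data

          import "fmt"

          // State gives each "absent" flavor its own type instead of
          // overloading nil. All names here are hypothetical.
          type State[T any] interface{ isState() }

          type Ready[T any] struct{ Value T }         // value is present
          type Pending[T any] struct{}                // not yet available
          type Missing[T any] struct{ Key string }    // missing data
          type Invalid[T any] struct{ Reason string } // bad data

          func (Ready[T]) isState()   {}
          func (Pending[T]) isState() {}
          func (Missing[T]) isState() {}
          func (Invalid[T]) isState() {}

          // Describe forces callers to confront each state explicitly.
          func Describe[T any](s State[T]) string {
              switch v := s.(type) {
              case Ready[T]:
                  return fmt.Sprintf("ready: %v", v.Value)
              case Pending[T]:
                  return "not yet available"
              case Missing[T]:
                  return "missing: " + v.Key
              case Invalid[T]:
                  return "bad data: " + v.Reason
              default:
                  return "unknown" // unreachable while the set stays sealed
              }
          }
          ```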

          1. 1

            The significant effort is in designing the states, behaviors, and transitions. That’s independent of language. To the extent that specific language mechanisms support that, and don’t have inordinate costs, great. I’ve yet to see hard evidence of the cost of nil or the cost/benefit of this feature.

            “Pretty convenient” is subjective and may have as much to do with personal preference or popularity as it does with engineering. I suspect even more.

            Go’s interfaces and simple type system are more than adequate in my experience. I’d like to see more hard evidence about the “cost of nil” and the actual cost/benefit of this proposal.

            1. 4

              The significant effort is in designing the states, behaviors, and transitions. That’s independent of language.

              I absolutely agree; I was pointing out that removing nil from all types lets you treat things separately if they should be, and one can always combine them again if necessary.

              I’ve yet to see hard evidence of the cost of nil or the cost/benefit of this feature.

              And you won’t. It’s just too expensive to find evidence of these things. We’re talking about something that, if the claims are true, is a small amount of gain at each point that would combine to a large gain. Software engineering lacks the metrics, discipline, and drive to measure such things well enough to say anything concretely.

              However, you can make an argument based on first principles. In general, most people argue that modular code is good. Being able to separate values that are correctly and completely constructed from those that aren’t certainly allows a kind of modularity that not allowing that separation doesn’t; a sketch follows.
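
              One way to read that in Go terms is the “parse, don’t validate” pattern. A minimal sketch with hypothetical names (Email, ParseEmail): the field is unexported, so the only way to obtain an Email is through the constructor, and downstream code never re-checks it:

              ```go
              package user

              import (
                  "errors"
                  "strings"
              )

              // Email is valid by construction: code outside this package
              // can only obtain one via ParseEmail, so it never needs a
              // nil check or re-validation.
              type Email struct{ addr string }

              func ParseEmail(s string) (Email, error) {
                  if !strings.Contains(s, "@") { // deliberately naive check
                      return Email{}, errors.New("invalid email: " + s)
                  }
                  return Email{addr: s}, nil
              }

              func (e Email) String() string { return e.addr }
              ```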

              “Pretty convenient” is subjective and may have as much to do with personal preference or popularity as it does with engineering. I suspect even more.

              Sure, but a lot of things in languages have to do with that, so it’s not as if the stances the Go authors take, for example, are more grounded in engineering than those of people who argue nil shouldn’t inhabit every reference type.

              Go’s interfaces and simple type system are more than adequate in my experience. I’d like to see more hard evidence about the “cost of nil” and the actual cost/benefit of this proposal.

              They are less than adequate in mine but, again, I think asking for hard evidence is just a convenient argument killer: it ain’t gonna happen. Partially because if you are OK with how problems are solved in Go, then you are probably coming at programming from a different direction than someone like myself who finds it inadequate; we’re standing on different foundations. And partially because, for most programs IME, the cost of failure is not very high.

              But to look at it another way, null being a billion dollar mistake is not that expensive amortized over 40 years of computer programs running. It just takes a couple of big companies going down for a few hours because of a null bug.

              And a final point: for someone like myself this isn’t just about nil. Type systems that let one express an option type well have a whole slew of abilities that are valuable to me, such as a result type letting me express errors in a type-safe way, and enums that can actually hold data. All these things come together to form a whole philosophy that someone like myself values.

              1. 1

                It’s just too expensive to find evidence of these things.

                There’s a significant gap between off the cuff remarks like “billion dollar mistake” and billion-dollar scientific studies. A good start would be even a back-of-the-napkin calculation. At least then there’s something that can be evaluated and improved upon.

                a small amount of gain at each point that would combine to a large gain

                Possibly, but the question then becomes: is this a “small gain”, or even a “small change”? If it is, the question then becomes: is this the best small change to make at this time?

                Based on my observations in the industry over 35 years, there is far more to be gained from teams improving their design skills and collaboration skills than from any change in languages or language features. And there is reasonable evidence for this. See Capers Jones, McConnell, the Software Engineering Institute, etc.

                you can make an argument based on first principles

                One can argue for many individual “small-ish” improvements. One cannot easily argue that they’ll add up to an overall net gain. As you pointed out, software engineering is complex. Not only is it complex, it is a complex set of interacting socio-technical systems with various feedback mechanisms. Even just technically, many small changes can affect something as critical as compile time and compiler complexity. These affect the overall workflow, and so the social / collaborative aspects of software engineering.

                most people argue that modular code is good

                That’s at least been demonstrated over time. And it’s been significantly studied over time; see Parnas for just one of many. But “modular code” and “eschewing nil” are not in the same ballpark. The cost of nil is good to study… I just personally am happy for others to run off and try it out before declaring it a “billion dollar victory”. Or even a measurable business-value victory for most teams.

                it’s not like the stances the Go authors take are more grounded in engineering than those who argue [against] nil

                A significant part of engineering in any discipline has been the experience and maturity of the discipline. Software engineering is no different. The overall Go approach is based on significant experience. Also, adding this mechanism is something that can be studied, and those studies can be improved. The decision to add this feature or not is far from a coin toss. The status quo is more grounded in engineering.

                asking for hard evidence is just a convenient argument killer

                Then all is lost.

                a billion dollar mistake is not that expensive amortized over 40 years of computer programs running

                Then this argument is made as rhetoric.

                Type systems that let one express an option type well have a whole slew of abilities that are valuable to someone like myself

                I have no argument with that. These things amount to personal preference and popularity more than anything else. Fortunately there are languages that provide those things. We probably don’t need to have them added to every language right away. And fortunately the Go team seems resolute not to.

                Horses for courses. Let the best language “win”.

                1. 2

                  There’s a significant gap between off the cuff remarks like “billion dollar mistake” and billion-dollar scientific studies

                  I think you are focusing on the “billion” a bit more strongly than I am. The talk from which the phrase gets its name is somewhat in jest. The point is really that a simple change (not allowing null) could have prevented a common source of issues.

                  Based on my observations in the industry over 35 years, there is far more to be gained from teams improving their design skills and collaboration skills than from any change in languages or language features. And there is reasonable evidence for this. See Capers Jones, McConnell, the Software Engineering Institute, etc.

                  I absolutely agree. But even if null pointers are a billion-dollar mistake, the worldwide software economy was estimated by one group to be $407bn in 2013 [1]. I’m not sure what type of people you usually converse with on the topic of null pointers, but of those I do (who are rabidly against them), none argue that removing null is some sort of panacea for software cost. It’s just a simple change that can be made to a language, and it removes a class of bugs that does not need to exist. There is a large base of code (OCaml and Haskell, at least) showing that software can be written in such a way that null is not a value that needs to inhabit every reference type.

                  As of now, almost every major language has some kind of optional type, or some mechanism for references that cannot be null. C++ has references and an optional type, Java has Optional, C# has had Nullable for years, TypeScript recently added non-nullable types, Kotlin has something similar, OCaml, SML, and Haskell have some kind of optional type, and Rust does as well. I’m sure there are more. There seems to be a collective experience moving towards distinguishing between a value and one that can also be null. I’m not sure how the Go decision is more grounded in engineering than every other language’s.

                  How hard would the evidence have to be for you to decide that nil is not a value worth inhabiting all reference types? As a comparison, Go has GC and dynamic dispatch, two things missing from C. I wager that you are content with both of these features being in Go, but I’m not aware of any economic evidence to support them. The world has mostly agreed that GC is a good thing, even if we don’t have hard numbers to back up that claim, and only a small fraction of people believe that GC isn’t adding value. How did you come to the conclusion that the current state of Go is worth being the status quo? Or is it purely the economic claims around NULL that make you hesitate?

                  [1] https://en.wikipedia.org/wiki/Software_industry#Size_of_the_industry

                  1. 1

                    My argument against the eschewing of nil is based on the inordinate attention it receives relative to its potential value. I suppose personally the current discussion dredges up past ones with Dart; that team, at least at the time, was not as resolute as the Go team. I investigated Go initially for that reason.

                    Re: GC and dynamic dispatch, they have each paid their way many times over. They first appeared in research languages, and slowly made their way into the mainstream as both the cost and the benefit became manageable.

                    The economics of time spent managing memory, fixing bugs, maintaining pointers, etc. were well understood. These mechanisms were added late in the game relative to when their economics were understood.

                    1. 1

                      Re: GC and dynamic dispatch, they have each paid their way many times over. They first appeared in research languages, and slowly made their way into the mainstream as both the cost and the benefit became manageable.

                      Is this the “hard evidence” you require for an option type? The option type has gone through the same progression you have described, showing up in research languages and, as I pointed out, now in a wide range of mainstream languages. The cost of adding one is very close to zero, looking at C#, Java, and TypeScript as examples. All three of these languages added the idea of an option type late in the game (TypeScript, admittedly, still being a new language).

                      I believe you might be over-estimating the cost of adding an option type, or similar, to a language in your analysis. There are multiple examples of that cost being very small. It also, clearly, reduces costs throughout a code base in the form of null checks that are no longer needed.

                      1. 1

                        The cost / benefit of GC was studied for years in the late 80s, early 90s.

                        The cost of eschewing nil in more complex languages like C# and Java is less noticeable, I suspect, than it would be for Go.

                        Fortunately for you, you’re generally not in the minority in languages these days. Fortunately for me, the Go team seems to be diligent about changing their language. We both win.

                        1. 1

                          I’m not aware of any economic studies on GC, however. Perhaps you can point me to some?

                          The cost of eschewing nil in more complex languages like C# and Java is less noticeable, I suspect, than it would be for Go.

                          I don’t see why that would be true (note that I am slightly lying, because Java doesn’t even do that good a job of eschewing nil).

                          Fortunately for me, the Go team seems to be diligent about changing their language.

                          Most language authors are diligent about changing their language.

                          I’m not demanding that you declare nil is bad; I just think your general perspective seems to be “where I am sitting is good and deviation is bad”, supported by asking for proof that I’d be surprised exists even for the features you value.

                          Thank you for the discussion, though, I enjoyed it.

                          1. 1

                            I wouldn’t say deviation is bad, just that it has a cost, and too often we pay insufficient attention to the costs and to whether there might be more cost-effective alternatives.

                            As for GC economics, see some of the papers here, and in the annual GC workshops from that time frame. Xerox and others had an economics focus in some of their pubs, but others did as well.

                            https://www.cs.kent.ac.uk/people/staff/rej/gcbib/gcbibE.html

                            Thanks here too. Constructive discussion in somewhat long form isn’t so common these days.