1. 42

  2. 5

    I’ve done a good amount of Haskell and initially really disliked Go, but now I like that I spend less time working on beautiful abstractions and just type the ugly Go and move on.

    I really wish there was something like Go with pattern matching/sum types/generics.

    1. 5

      OCaml? How about Reason?

      1. 3

        OCaml is close, but it seems too dead in terms of ecosystem/community, and I think it still doesn’t handle parallelism. I think Reason does some work on my real but petty complaint that it’s kind of ugly to look at.

      2. 1

        I really wish there was something like Go with pattern matching/sum types/generics.

        Rust?

        1. 5

          I started writing a wishlist that basically summed up to a GC’d Rust, but removed it. The friction of threading lifetimes and such is pretty high when the performance of a GC isn’t a problem. Hoping that feeling goes away a bit as I get better with Rust.

          edit: the list is basically: sum types, generics, parallelism, static binaries, GC, green threads, pattern matching, strict evaluation, untyped I/O, and an imperative style with functional sugar like Rust has

          1. 1

            So, a strict, impure, Haskell?

            1. 2

              Your description sounds like ML, which was a major design inspiration for Rust.

              1. 1

                I know very little about ML implementations, and I don’t know if there are MLs that have parallelism and green threads; otherwise, yeah, I would have suggested ML. Perhaps OCaml fits? Someone familiar can chime in …

                1. 1

                  OCaml has something similar to green threads, implemented by the Lwt and Async libraries, but not yet parallelism, which is underway but will take time. These concurrency libraries do have tools to shell out computation to subprocesses, though, so while it’s not terribly pretty, it can be worked around.

                  What I am quite interested in is the new effect system which puts an interesting spin on side-effects and how they can be handled in a type system.

        2. 1

          That was what Oden http://oden-lang.github.io/ was supposed to be. A “statically typed, functional programming language, built for the Go ecosystem.” – basically a functional language that compiles to Go. The guy who was doing that project gave up on it because it was too much work.

          I started toying around with creating an Elm-to-Go compiler in Go, but quickly found that to be too difficult (mostly the static type inference). I think the easiest thing to do would be to take the existing Elm compiler and retarget it to output Go code instead of JavaScript. The problem is, I don’t know Haskell (which is what the Elm compiler is built in). I’m not sure what’s more difficult: learning Haskell and the Elm compiler, or building an Elm-to-Go compiler from scratch in Go.

          I think it’s definitely possible to have Java:(Scala/Clojure) :: Go:X where X has yet to be defined.

        3. 6

          I suspect the strong culture of code generation is largely because it lets you work around the flaws of the language.

          Definitely the case, since the language was designed knowing that code generation works. Google uses code generation heavily.

          1. 9

            Works better than polymorphism? And even than polymorphism implemented by monomorphisation, which itself is essentially a form of code generation?

            1. 3

              Within the goals and ideals of the Go project, yes. Maybe not for other languages.

              You have to remember that Go set out to be a simple language for rapid development, with the minimum number of features to get most things done. There are 3 major options for generics, which of them makes the most sense for Go within its goals?

              1. no generics / code generation
              2. simple generics system
              3. complex generics system

              For 3, you break the goal of keeping the language simple. So that’s out for sure.

              Even with 2, you’ll have complainers. There will be people who want to express something and can’t because the generics are too simple. So why even bother?

              Most people probably don’t even need code generation for generic code. If type asserts on interface{} are genuinely too slow for you—as opposed to offending your delicate sensibilities w.r.t. “slow” code—then you can afford to put a little extra effort into that hot path.
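
              As a hedged sketch of that trade-off (all names here are made up for illustration): the interface{} version pays a runtime type assertion per element, and the hand-specialized version is what you’d write only for a genuinely hot path.

              ```go
              package main

              import "fmt"

              // SumAny works on boxed values via interface{}; each element
              // costs a runtime type assertion (and panics on a non-int).
              func SumAny(xs []interface{}) int {
                  total := 0
                  for _, x := range xs {
                      total += x.(int)
                  }
                  return total
              }

              // SumInts is the specialized version for the hot path: no
              // boxing, no assertions.
              func SumInts(xs []int) int {
                  total := 0
                  for _, x := range xs {
                      total += x
                  }
                  return total
              }

              func main() {
                  fmt.Println(SumAny([]interface{}{1, 2, 3}))
                  fmt.Println(SumInts([]int{1, 2, 3}))
              }
              ```

              For most code the SumAny style is fast enough; the point is that specializing is extra effort you spend only when profiling says so.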

              With code generation, you can write a system as complex and weirdly specific as you need for your use case, if you really really really need it. The language itself can remain simple, easy to understand, and easy to use for the common case. It’s not a perfectly elegant solution, it’s a get things done solution, which is what Go is all about.
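
              To make that workflow concrete, here’s a minimal, hypothetical sketch. The generator invocation in the comment is illustrative only (genny is one real third-party generator, but the exact flags here are not meant to be copied verbatim); `go generate` simply scans source files for `//go:generate` comments and runs the commands, and the checked-in result is ordinary specialized Go:

              ```go
              package main

              import "fmt"

              // A real project might declare something like:
              //   //go:generate genny -in=max.go -out=max_gen.go gen "T=int,string"
              // The function below stands in for what such a tool would emit.

              // MaxInt is a "generated" specialization of a generic Max for int.
              func MaxInt(a, b int) int {
                  if a > b {
                      return a
                  }
                  return b
              }

              func main() {
                  fmt.Println(MaxInt(3, 7))
              }
              ```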

              1. -5

                yes

                1. 0

                  P.S. He asked a pretty broad, vague question that totally depends on the situation and implementation, and as such it is not easily answered effectively.

                  The yes/no answer was sarcastic.

                  1. 8

                    You’ve been here more than long enough to know that the culture really frowns on flippant, uninformative responses, even to bad questions. One-line jokes are pretty much always mostly noise, rather than signal, even when people get them.

                    I don’t know what’s going on exactly but I’m sure you can do better.

                    1. 0

                      I disagree, one line jokes can have the highest signal to noise ratio of any comment if they are delivered well and hit a salient point.

                      This one obviously wasn’t though, because it required a followup to explain it.

                      1. 2

                        They can when they deliver information that helps the recipient understand the point being made. That comment didn’t. Here’s an example of one that does:

                        Person A: If it’s FOSS, more eyes make sure there’s no vulnerabilities in it. People catch easy stuff quick. Just use OpenSSL so you have nothing to worry about.

                        Person B: The Heartbleed bug shows otherwise.

                        Person B didn’t do much explaining, since Person A is pushing propaganda that’s easily debunked, and not doing their own research either. However, the one-liner gives a phrase Person A can Google to find out their position was incorrect. That’s a one-liner with signal. Lobsters is fine with those since they save time and space.

            2. 5

              Zero values are almost never what you want

              When I started with Go, I thought the same thing: zero values are almost worse than uninitialized memory.

              However, after using Go for a while, I’ve changed my tune. For an imperative language, zero values are amazing. Provided, of course, you intentionally design your zero values to be useful. It’s somewhat akin to nil punning in a Lisp, or zero-dimensional arrays in a scientific language with APL-like types. You can write more general code with less case analysis by treating a zero value as a useful case.

              For example, slices are represented internally as pointer/length pairs, but a nil/zero slice is a perfectly valid slice to append to. No need to allocate or shuffle around an empty slice on the heap, but still one code path.

              Similarly, a structure of configuration flags has all default values if you handle the zero values appropriately. The downside of this is that you may have some EnableFoo flags and some DisableBar flags, but the upside of that is that you always know what the default value is. It’s only really problematic if 0 is a valid value for something that you want to be both configurable and defaulted to non-zero, but that’s surprisingly rare.
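
              A minimal sketch of that design principle, with hypothetical field names (note the Disable prefix, chosen so the zero value false means the feature is on by default, and the fallback port is an assumption of this sketch):

              ```go
              package main

              import "fmt"

              // ServerConfig is designed so its zero value is a usable default.
              type ServerConfig struct {
                  Port              int  // 0 means "use the default port"
                  DisableKeepAlives bool // zero value false => keep-alives on
              }

              // EffectivePort treats the zero value as "pick the default".
              func (c ServerConfig) EffectivePort() int {
                  if c.Port == 0 {
                      return 8080 // assumed default for this sketch
                  }
                  return c.Port
              }

              func main() {
                  var cfg ServerConfig // the zero value is a complete config
                  fmt.Println(cfg.EffectivePort(), !cfg.DisableKeepAlives)
              }
              ```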

              Unfortunately, Go screws this up with many things in the stdlib. Even maps get this wrong: you can append to a nil slice (and it will alloc/grow), but you can’t assign into a nil map!
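
              The asymmetry is easy to demonstrate: appending to a nil slice and reading from a nil map both work, but writing to a nil map panics at runtime.

              ```go
              package main

              import "fmt"

              func main() {
                  // Appending to a nil slice is fine: append allocates as needed.
                  var s []int
                  s = append(s, 1, 2, 3)
                  fmt.Println(s)

                  // Reading from a nil map is also fine (yields the zero value)...
                  var m map[string]int
                  fmt.Println(m["missing"])

                  // ...but writing to a nil map panics.
                  defer func() {
                      if r := recover(); r != nil {
                          fmt.Println("panic:", r)
                      }
                  }()
                  m["key"] = 1 // panic: assignment to entry in nil map
              }
              ```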

              I’d argue that you probably want both nominally typed, closed structs with zero values like in Go AND structurally typed, open/extensible records without zero values (like in an ML).

              1. 7

                I think zero values make a ton of sense sometimes. This idea, ironically, has been deeply explored in Haskell. The local optimum there is the Monoid typeclass. In particular, attempts to make a superclass of Monoid which only allows for a default value and doesn’t expect it to be the “zero” of a meaningful, associative binary operator have failed.

                Why?

                Because of exactly what you’re poking at. Zero values don’t make sense in isolation, but they make a ton of sense when associated with some kind of operations for which they play a default/zero role.

                I’m glad Go profits from this idea sometimes, but it still feels ad hoc and a touch awkward the way it is used.

                1. 2

                  I’m familiar with monoids and their uses. To me, “default” implies something you can change. The identity element of a monoid is closer to some notion of “missing”, rather than a notion of “default”. A notion of default also requires an accompanying model of time, so your statement about the Haskell community’s findings is unsurprising to me for default values outside of a particular monad.

                  I did some quick Googling to refresh my memory on Haskell’s Data.Default and found this article: http://phaazon.blogspot.fr/2015/07/dont-use-default.html - I mostly agree with it! However, I’ll note that if you had a monad equipped with some kind of generative/allocation operation, you could create a useful law for default values: Two independent allocations should produce identities containing default values that are behaviorally equivalent. This means that you couldn’t allocate an integer and then expect it to be a zero for addition and a one for multiplication.

                  I also took a look at the various Haskell concurrency packages. For example, Control.Concurrent.MVar has newMVar that takes an initial value, but no allocation function that uses Data.Default or similar. Maybe that exists somewhere else within the Haskell ecosystem, but maybe not since it may require first class or dynamic types to be useful? In imperative programs, it’s frequently useful to be able to allocate an object of a fixed-size / known-type before you know what operation you’re going to perform on it. Similarly, it may be useful to “clear” a pre-allocated object. Consider a game that pools objects: You may want to allocate 50 monsters on startup and reset them to default state after they are killed and respawned.

                  EDIT: Should also add that, outside of an allocation operation, default values are useful for making lookup operations on collections into total functions. In that case, the model of time is usually one of a loop or whatever state monad you embed the data structure into.
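
                  Go’s own maps are a concrete instance of this: indexing is a total function that yields the value type’s zero value for a missing key, and the comma-ok form recovers the distinction between “absent” and “present as zero” when you need it. A small sketch:

                  ```go
                  package main

                  import "fmt"

                  func main() {
                      counts := map[string]int{"a": 2}
                      fmt.Println(counts["a"])       // present key
                      fmt.Println(counts["missing"]) // missing key: zero value, no panic

                      // The comma-ok form is the building block for a
                      // lookupOrElse-style helper.
                      if v, ok := counts["missing"]; !ok {
                          fmt.Println("absent, defaulting to", v)
                      }
                  }
                  ```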

                  1. 1

                    I think the notion of time is already well contained within monoids (perhaps more clearly by thinking of monoid actions instead of the essential data structure). This idea tends to drive semantics of monoids more often than not when I’m working in domains particular to my goal instead of general to a library. In this case, the zero is exactly the “default” value and then each monoid value (or action) describes destructive changes which occur in sequence. Associativity is exactly what you want to give this a sense of “time” instead of forcing it to be viewed as a branching tree.

                    Behavioral equivalence, I think, is also exactly what I’m saying is important about a default value. You are using a monad to provide the notion of behavior (in which case, from a typing perspective, I would want to see the type of the values indexed by the monad of interest to force the connection).

                    Monoidal behavior equivalence just states that the zero must behave identically (and “default”-ily) with regard to the monoid operation. You could definitely find similar behavioral-niceness properties for other kinds of operations. In practice, it seems like many of these end up looking like “monoids with extra stuff bolted on” at least in my experience.

                    In Haskell, Data.Default is largely shunned. It doesn’t much imply what kind of behaviors should relate to this default value making it very difficult to package up into a library. It might not be too uncommon in applications where behavioral-niceness is obvious in context, but even there, personally, I tend to try to make whatever I’m defaulting into a monoid if possible.

                    The other mechanism that tends to get used is just providing the default values outside of a typeclass with a descriptive name like defaultPort or something like that. More verbose, but forces the reader to connect “default” to the domain that gives it context.

                    Defaults on partial returns also just feel like something that ought to be a parameter. lookupOrElse and all that jazz.

              2. 2

                In Go, you import packages by URL. If the URL points to, say, GitHub, then go get downloads HEAD of master and uses that. There is no way to specify a version, unless you have separate URLs for each version of your library.

                This is a criticism of the go get tool, not Go in general. You can easily specify any version you like by checking it out in $GOPATH using git/svn/whatever. go get is just a convenience for quickly getting started with a package.

                1. 3

                  This is a criticism of the go get tool, not Go in general.

                  It’s a criticism of the Go ecosystem. If the response to that criticism is “yes, but you can use general tools to manually manage package installation instead of using the tools provided by the ecosystem”, then it sounds like a valid criticism.

                2. 1

                  I do find it a little strange that gofmt has been completely accepted, whereas Python’s significant whitespace (which is there for exactly the same reason: enforcing readable code) has been much more contentious across the programming community.

                  Python simply didn’t go far enough. And, really, the argument for significant whitespace was always pretty weak from the Python side of things, if you ask me. It was someone imposing a preference on everyone else just because.

                  The Go team used performance in the compiler as the one and only reason, marketed that, and then talked about the side effects of every code base being the same and the benefits that come from that result. This approach is still opinionated, but was the result of a trade off of flexibility vs speed. Speed always wins arguments if clarity is not reduced, and in this case, it wasn’t, so everyone jumped aboard.

                  It didn’t hurt that every aspect of the format was encoded in a really fast, and trivial to use tool though.

                  1. 13

                    How would Python go farther? Connecting the lexical structure to white space is common in FP PLs.

                    Related: it seems like a lot of Go’s culture works by convincing users that simpler language/compiler implementations are actually in their interest even when they aren’t. Also, I suspect the Google halo effect is at work here. Tech needs to quit lionizing megacorps.

                    1. 7

                      It’s less that people lionize “Google” but they lionize Rob Pike, Robert Greisemer, and Ken Thompson, three engineers who are legitimately good at what they do.

                      1. 12

                        The engineers who built Go are competent, no question, but the language was heavily influenced by Google, its culture, and the needs of the organization.

                        Go was designed to be a Big Corporate Language. Most notably, it assumes that the programmer isn’t smart enough to handle generics. It’s designed to enable large codebases, written by average programmers, that aren’t clusterfucks. That’s a useful objective. It’s important. However, it’s not as exciting to me, so long as I’m an above-average programmer working on projects that are nowhere near a million lines of code. (That’s a “Maserati Problem” if you don’t have Google’s level of resources.) So I tend to weigh the deficits of the language, and the cultural problems [1] that I see as likely to come from it, more heavily than the speed of the compiler and the strength of the tool chain.

                        [1] For a flagrant example of cultural problems around a language, look at Java. The JVM isn’t that bad and the Java language itself, while it isn’t great, can’t be blamed for the VibratorVisitorFactory patterns and the ORM monstrosities. You can’t pin that on the designers of the Java language, but these cultural factors are entrenched enough that many Java codebases are unusable.

                        1. 5

                          The engineers who built Go are competent, no question, but the language was heavily influenced by Google, its culture, and the needs of the organization.

                          Maybe you could say that, but really, Go is like the third iteration of Pike’s Newsqueak. I’d say it’s likely less influenced by Google, and more an attempt to regain some of the culture of Bell Labs. But I’m certainly not qualified to speak to that!

                          1. 6

                            One quick point that isn’t strictly correct:

                            it assumes that the programmer isn’t smart enough to handle generics.

                            That is not the assumption behind the lack of generics. The language has generics (for blessed types). The language does not have more user-accessible generics because they would bloat either compile times or the runtime.
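
                            For instance, the built-in container types are exactly those “blessed” generics: maps, slices, and channels all take type parameters, even though (at the time of this discussion) user code couldn’t define new parameterized types of its own.

                            ```go
                            package main

                            import "fmt"

                            func main() {
                                // map[string][]int is the built-in map generic
                                // instantiated at string keys and []int values.
                                scores := make(map[string][]int)
                                scores["alice"] = append(scores["alice"], 10)

                                // Channels are parameterized too.
                                ch := make(chan float64, 1)
                                ch <- 3.14

                                fmt.Println(scores["alice"], <-ch)
                            }
                            ```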

                            1. 2

                              I may have misinterpreted the reason for the lack of generics.

                              I can see how generics would add time to compilation– in particular, type inference– but how much does it hurt run time performance to have them? I’m not sure that I see why that would be inevitable.

                              1. 3

                                Fully reified generic runtimes, such as the CLR for .net, create and load new types for each generic type or parameter the compiler emits. This requires a bit of extra metadata in the IL, and much more complexity in the JIT compiler to allow for dynamically loading new types at runtime.

                                The argument isn’t really about performance, I think the CLR is pretty fast considering, but more around how complicated the runtime ends up being.

                                1. 3

                                  Honestly I think the class of runtimes that include a JVM/CLR style virtual machine with a JIT is just not anywhere in the design space Go is considering, so they haven’t explicitly argued against that approach to implementing generics. The discussions around generics (esp. a few years ago when it seemed like a more open question that they might be implemented) assumed that the two approaches that might be implementable within Go’s design were either ML-style parametric polymorphism (with a runtime performance hit) or C++-style monomorphization (with code bloat and a compile-time hit), and they didn’t like either option, and couldn’t come up with a suitable variant to finesse away the problems.

                                  1. 3

                                    C++-style monomorphization (with code bloat and a compile-time hit)

                                    Could you help me understand this? I alluded to it in an earlier comment.

                                    How would monomorphization add more code bloat and compile-time hit than the current code generation approach? They seem equivalent to me, besides the fact that code generation happens manually outside the language (and so is strictly a worse approach, it seems to me).

                                    1. 5

                                      They are only equivalent in theory. In practice, code generation has a much higher barrier to use than simply writing a generic function.

                                      1. 2

                                        Besides having higher barrier to use, the lack of standardization and composability in the code-generation solutions seems to discourage some of the C++ style blowup from layers of templates, where templates expand templates which expand templates, etc. Instead it seems the norm is to roll more of a non-general flattened solution through code generation, which works in a specific case, and needs the generator to be modified if your requirements evolve and you need different cases later. A legitimate set of tradeoffs imo, even if it’s not usually the one for me.

                            2. 1

                              It’s actually a product of Wirth’s philosophy merged with some others. One of the authors said it was heavily influenced by his experience with Oberon-2. That’s a Wirth language that puts simplicity above everything, has a GC, compiles very fast, and has decent performance. Go seems like an industrial take on Wirth language mixed with some others.

                              My favorite of the Wirth style was Modula-3, a competitor to C++ and Java. It crammed quite a few capabilities into a language that was still not that complex, and it had a standard library with some properties formally verified, too. One team wrote an OS with it, SPIN, with type-safe linking of user-mode code into the kernel for safe accelerators.

                              https://en.m.wikipedia.org/wiki/Modula-3

                            3. 2

                              Sure, and they are. But it shouldn’t distort criticism of what they make.

                              See also: “I never liked writing tests anyway, and DHH said it isn’t worth it.” I guess I have an allergic reaction to this sort of unthought that gets a disproportionately loud voice in industry despite being somewhat disengaged to begin with.

                              1. 5

                                See also: “I never liked writing tests anyway, and DHH said it isn’t worth it.” I guess I have an allergic reaction to this sort of unthought that gets a disproportionately loud voice in industry

                                I’ve come to realize that corporate programming is the Fake News of STEM. You can say anything about programming or hiring, call it One Weird Trick, and it’ll be a #1 hit on Hacker News and the YC partners will say that you’re brilliant. It’s all fads and the shuffling of deck chairs.

                            4. 3

                              I can’t count how many variations of PEP-8 I’ve used over the years, or how many people fight over tabs vs. spaces, and the problems that causes. Going farther would force the issue: tabs only, or spaces only; forcing the way argument lists, dictionaries, lists, etc. can be formatted. Convention for style works until it doesn’t, and then the significant whitespace that was supposed to be easy to read no longer is.

                              1. 1

                                Forcing tabs or spaces only is the logical next step. Python’s rules for mixed tabs/spaces files are weird, but there’s no way to make them sane, and it isn’t worth the effort.