The thing that I find so nice about Go is the extensive standard library. Why is the lack of macros seen as a limiting issue though?
Macros can be extremely helpful, especially for certain classes of problems. The Rust Macro Guide is an interesting read, if you have the time. If you are thinking only of C macros, consider that there are hygienic macros which make everything happy and wonderful.
Functions are the primary tool that programmers can use to build abstractions. Sometimes, however, programmers want to abstract over compile-time syntax rather than run-time values. Macros provide syntactic abstraction.
Honestly, the only thing that worries me about Go is the garbage collector. Not that it has a GC, but that its GC is so young. Java’s GC was garbage (heh) when it was first introduced. But it has the advantage of 10+ years of intense optimization. CMS is a great GC nowadays, and G1 looks very promising.
I’m positive that Go will eventually arrive at a great GC too…but I’m worried it is going to take the same 10+ year trajectory to get there. I think we will start to see more people complaining about Go GC issues once companies/products really start to push the language with high-performance apps.
Java, Python, ML, Smalltalk and Lisps have an object-graph memory model; while this has big advantages, it also makes their overall performance really sensitive to the garbage collector’s performance. Go’s memory model is from C, and it’s an object-embedding memory model, but without the object-slicing problem that C++ has from combining object embedding with subclass extension. This means the garbage collector is dramatically less important to Go’s performance.
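To make the distinction concrete, here is a rough sketch of the two layouts in Go terms (my own illustration, with invented type names): embedded fields live inside the enclosing struct, while an object-graph layout puts each field behind a pointer the collector has to trace.
package layout

// Object-embedding: the two Points live inside Segment itself, so a Segment is
// a single allocation with no extra objects for the garbage collector to trace.
type Point struct{ X, Y float64 }

type Segment struct {
	Start, End Point
}

// Closer to the object-graph model: each Point is a separate heap object
// reached through a pointer, multiplying the objects the GC must track.
type SegmentGraph struct {
	Start, End *Point
}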
Interesting, I didn’t know this. Do you have any other resources/reading to look at? I’d love to dig into the technical details about how these two different memory models work.
I don’t know of any good essays about this, although I’ve started a draft of one to post to kragen-tol. I don’t even know if someone has already invented terminology for it. Even “memory model” is a bad term, because it already means at least two other conflicting things (one from multiprocessor concurrency, and one from the MS-DOS days of segment:offset).
There’s a third common “memory model” other than object-graph and object-embedding, although I mostly don’t recommend it: parallel arrays, as traditionally used in Fortran but also found in APL, J, and K. Beyond that, there are a multitude of other possible ways of organizing memory, but most of them are less useful in some way.
Describing the Java or Python heap as an object graph is established terminology, but part of what I’m working from here is Allen Short’s insight in describing Python as an “object-graph language” the other day on IRC.
This was my big initial worry with Go, but I have reasons for optimism.
In just two years, Go has gone from having a conservative GC in 1.0 to having a fully precise GC in 1.3, with major improvements to parallelism along the way. The unsafe package means a generational/compacting/concurrent GC may be a ways off, but unlike many open-source languages, the Go team is fully capable of shipping a first-class GC.
It’s also easier to write GC-friendly applications in Go, in large part due to the control over memory layout and allocation. (The fact that you can trivially write a benchmark and measure memory allocation for your code doesn’t hurt, either.)
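Such a benchmark is only a few lines; a minimal sketch (the measured function is just a placeholder), placed in any _test.go file:
package sample

import (
	"fmt"
	"testing"
)

func BenchmarkEncode(b *testing.B) {
	b.ReportAllocs() // `go test -bench=.` will then report allocs/op and B/op
	for i := 0; i < b.N; i++ {
		_ = fmt.Sprintf("%d", i)
	}
}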
Finally, most of the big-heap problems which cause grief for GC’d languages are best solved by managing use-specific memory pools off-heap. If you’re willing to pay the cgo tax and copy data, you can pretty easily leverage the existing ecosystem of highly-tuned in-memory data structures to store data off-heap while keeping your Go-managed heap nice and light.
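As a rough sketch of that off-heap idea (my own, with invented names, and assuming you accept the cgo overhead): copy bytes into C-allocated memory so the Go collector never scans them, and copy them back when needed.
package offheap

/*
#include <stdlib.h>
*/
import "C"

import "unsafe"

// Put copies b into C-allocated memory that the Go GC does not manage.
// The caller must eventually call Free.
func Put(b []byte) (unsafe.Pointer, int) {
	return C.CBytes(b), len(b)
}

// Get copies the off-heap bytes back onto the Go heap.
func Get(p unsafe.Pointer, n int) []byte {
	return C.GoBytes(p, C.int(n))
}

// Free releases the off-heap allocation.
func Free(p unsafe.Pointer) {
	C.free(p)
}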
I’m curious if this is actually an issue, for two reasons:
The knowledge / research that defines how the Java garbage collector works is widely available (afaik), which means the Go maintainers can make use of it where appropriate.
It might be that Go’s programming model is “easier” to build a GC for (I have no idea).
I agree to an extent. The knowledge about Java’s GC has been well documented and transcribed. Future systems will have an easier time building robust GCs, but there is still a lot of work to be done.
For example, Go is only just now getting a precise GC, which is a mandatory pre-req for a higher performance generational collector. If Go wants to be a serious server language, it should really offer a pauseless collector too…which means it will also have to develop a concurrent, compacting, generational, precise GC like Azul.
Does this mean that ultimately, many of us will be bitter about how lousy Go is, and wish for something better? That’s a bummer, Go is really quite nice. But it’s awesome because eventually, there will be something so fabulous it makes Go look like garbage. :D
Also regarding Go and generics, I see the usefulness/complexity thing cited everywhere, but how does Go as a language help avoid needing generics? Slices, maps, and channels are generic, is that really all you need to do things?
I’ve already seen many things which make it look like that.
No, and so Go requires you to lie to the type system (i.e. cast) when you want to do things generically.
You can’t just throw Go under the bus saying it “looks like garbage” without naming the languages you feel are superior.
The above are much more capable of writing reusable functions. Some are again much more capable than the others, but the bar is pretty low!
Whining about a language is a waste of everyone’s time. If you have something useful to contribute, do so. Plenty of people have given detailed, reasoned, substantial accounts of their experiences with Go, good and bad. Maybe it wasn’t clear, but that sort of answer is the kind I was inviting.
Criticising things is an important thing to do. Being smug about something is not useful.
I’d argue that constructively criticizing things is an important thing to do. If criticism isn’t constructive, then you have little hope of the right people reconsidering what they think they know.
Your comments in this thread are certainly not constructive.
Taking this discussion meta, how would you classify your comments as not smug? I agree that criticism is important, but I don’t see how you have done anything but call Go garbage and list languages with generics of some variety.
Criticizing things is indeed an important thing to do, but you haven’t succeeded in doing it so far.
Go supports a different form of compile time safe polymorphism via structural sub-typing. No lying necessary.
I agree that Go supports a single, limited form of polymorphism. Very quickly you come to a point where you have to lie.
In what situations? When you are writing your own generic containers you run into a problem, but otherwise when do you actually need more than an interface? The built in containers are quite capable for most applications. If you truly need a specific data structure, how often are you going to need to use it on any possible type? A database may use a skip list for good concurrency working with tuples. A text editor may use a rope for working with strings. I would argue that using those specialized data structures generically in every situation is marginally useful.
The Go developers argue that generics are not adequately useful to be added, and I haven’t seen a compelling example that indicates otherwise, do you have one?
The built in containers are quite capable for most applications. If you truly need a specific data structure, how often are you going to need to use it on any possible type?
I want a Tree data structure, maybe I’ll only use it on 3 different types of values in my application but why:
Should I have to write it myself rather than rely on a library?
Should I have to either write it 3 different times or give up type safety by casting?
Should I know about what’s in the Tree when I don’t need to?
The 3rd point might not seem like a problem but it has huge implications, documented in a concept called parametricity.
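To illustrate the second and third points (my example, not the commenter's): without type parameters the container is written against interface{}, and every read site needs an assertion.
package main

import "fmt"

// A tree over interface{} compiles once and holds any value type,
// but callers give up static typing at every use site.
type Tree struct {
	Left, Right *Tree
	Value       interface{}
}

func main() {
	t := &Tree{Value: 42}
	n := t.Value.(int) + 1 // must assert; asserting the wrong type panics at run time
	fmt.Println(n)
}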
I would argue that using those specialized data structures generically in every situation is marginally useful.
It’s not just data structures. Functions shouldn’t have to be unnecessarily specialised. If I need to write a function like so:
func idInt(a int) int {
	return a
}
Why should we care about whether it’s an int or a string or a gobbledok? Our alternative is to use an interface:
func id(a interface{}) interface{} {
	return a
}
But now we know nothing about the output type when we go to use it. We’d have to cast - will that work? Only after knowing exactly the definition of the function do we know for sure.
I hope I don’t have high standards by refusing to use a system which doesn’t allow using types as documentation or which asks for information it doesn’t need nor use.
I am familiar with all of the concepts you described. However, Go emphasizes simplicity. I haven’t ever heard someone claim that they encountered a serious problem with lack of total parametric polymorphism that so critically impeded their ability to write their application that they would trade away the simplicity of Go. Go was designed for a specific purpose, to write concurrent applications that are easy to reason about, both in terms of their concurrent aspects and performance as a whole.
It’s not just data structures. Functions shouldn’t have to be unnecessarily specialised.
This is really more of a philosophical point than a practical one. While such a feature is nifty, it’s not that useful. We can talk all day about how neat monads are (really neat), or how map is so much more elegant than a for loop, but at the end of the day when you are writing a network server chewing through tens of thousands of requests per second, reasoning about the exact performance characteristics of a for loop is easier.
I want a Tree data structure
Why? Why not use a map? There are plenty of ways to use trees besides maps, ways that involve custom, non-generic logic to create and maintain the tree to reap some performance benefit or other desirable characteristic.
I hope I don’t have high standards by refusing to use a system which doesn’t allow using types as documentation or which asks for information it doesn’t need nor use.
You have different priorities. You aren’t wrong for your priorities, and your priorities don’t make all other priority sets wrong. You seem to be claiming that the lack of generics is so crippling that Go is trash not worth using, by anybody. That’s how I read your tone anyway. But that flies in the face of reality: plenty of people are accomplishing significant feats with Go, without generics. I originally asked what language features—or possibly characteristics of the problems Go is solving—accommodate the lack of generics. Why do the developers feel they aren’t useful enough to add? Defending Go has made me come up with concrete reasons, and I have answered my question for myself.
Very quickly you come to a point where you have to lie.
I don’t think so. Go is about making the simple decision, not the ultra nifty yet often unnecessarily complex one. Make the simple choice, and you don’t have to lie. That puts Go at the opposite end of the spectrum from languages like Haskell, a trait that a lot of people consider a feature.
Also, I should note that around 4 years ago I too thought Go was a sensible idea and I was using it for things but I noticed the problems and started using more capable tools.
This is really more of a philosophical point than a practical one.
No, code duplication is a very practical issue - it’s ridiculous to say otherwise.
.. reasoning about the exact performance characteristics of a for loop is easier.
Parametric polymorphism is not about for loops.
Why? Why not use a map?
Do you see the irony here? Making a false compromise in the name of performance but then sacrificing it by using a Map?
You seem to be claiming that the lack of generics is so crippling that Go is trash not worth using, by anybody.
Yes, I refuse to use a language which doesn’t allow code reuse and I’ll point out the silly false sacrifice made for “simplicity” reasons. I think more should have the same standards.
That’s how I read your tone anyway
I have no tone.
But that flies in the face of reality: plenty of people are accomplishing significant feats with Go, without generics.
People are accomplishing things despite the lack of generics, by not abstracting and not reusing code.
Why do the developers feel they aren’t useful enough to add? Defending Go has made me come up with concrete reasons, and I have answered my question for myself.
I have seen the answer, too. They put up with code duplication and don’t allow abstraction.
Go is about making the simple decision, not the ultra nifty yet often unnecessarily complex one.
Parametric polymorphism as a type system feature has been around for about 30 years now. It’s extremely simple to describe and implement.
Go’s “simplicity” (i.e. lack of parametric polymorphism) is a silly thing to trade for. Your application becomes more complicated because of no abstraction and no code reuse so that the language spec doesn’t spend a few paragraphs defining parametric types!
I just searched Twitter for #golang to see what other strange things were being said and I immediately saw this monster:
https://clipperhouse.github.io/gen/
The solution for lack of generics? A code generation tool!
https://github.com/clipperhouse/gen/blob/master/templates/projection/projection.go
This is simplicity?
https://github.com/joeshaw/gengen
C++ templates are just fancy code generation. Well, fancy is a stretch. Built in though.
Having a type system that handles generics is a better solution for generics, certainly. But code generation isn’t horrific in and of itself. It’s one of many tools in the world of programming tools, and is fairly easy to work with. Protobufs use code generation to make parsers and interfaces, and that’s fine in my opinion.
I read: generics aren’t necessary, code generation rather than generics is fine.
Very strange ideas!
For certain kinds of applications, sure. Because for certain applications generics aren’t that important.
I can’t think of any nontrivial applications for which generics wouldn’t be useful. Besides code reuse, parametric polymorphism gives you better reasoning about what your code is doing via parametricity/free theorems.
Linux kernel. Written in C, it obviously doesn’t use parametric polymorphism though it approximates it in some places. However, most of the algorithms and data structures are tuned for their particular use patterns, and thus the code is not generic.
HTTP server. I can’t think of a reason to use generic anything. It’s mostly pushing bytes and data validation. The Go standard library includes a high performing concurrent HTTP server. Nginx and Apache are both written in C. Most file servers fit here, actually.
Message buses. Any algorithms written for a message bus are fairly specific, and a non-trivial bus will have its own versions of all of them.
Load balancers. TCP, UDP, HTTP, whatever you like, it’s a hash table sitting on a shitload of sockets. No polymorphism here beyond what Go provides.
Firewalls. A firewall has to be low latency, basically transparent. Custom up the wazoo, an algorithm in a firewall can branch on anything in those headers for any number of reasons.
Databases. Skip lists seem like a great thing to make generic, until you start optimizing for more and more fields in the elements and it only works for DB records anyway. And more importantly, you want to know where every last byte is, all the time.
Now, I’m not saying that generics won’t be somewhat useful for a lot of these. Of course they would in some parts of code. But most of those use cases are covered with Go’s typing system. The number of times you need to be able to specialize some code to literally any type ever is not that high. An interface will do just fine.
Yes, I refuse to use a language which doesn’t allow code reuse
They put up with code duplication and don’t allow abstraction.
Seriously?
Parametric polymorphism as a type system feature has been around for about 30 years now. It’s extremely simple to describe and implement.
Then why are C++ templates such a mess? And why is it that Java generics are implemented with type erasure, don’t work with primitives, and aren’t safe with arrays?
You are right though, it’s even said on the Go site that some code duplication is the preferred solution to certain things. Many people find that a small amount of duplicate code is acceptable in certain circumstances, otherwise the industry standard would be Haskell. For an application that necessarily relies heavily on generic types, I wouldn’t pick Go.
Your application becomes more complicated because of no abstraction and no code reuse so that the language spec doesn’t spend a few paragraphs defining parametric types!
Exactly zero abstraction is ideal. Main is the only function anyone needs.
Then why are C++ templates such a mess? And why is it that Java generics are implemented with type erasure, don’t work with primitives, and aren’t safe with arrays?
C++ templates are not a great idea
Type erasure is a brilliant idea
Java’s generics definitely have problems and could have been done much better, that’s the problem with trying to retrofit generics and making compromises!
Exactly zero abstraction is ideal. Main is the only function anyone needs.
I truly hope and believe this is not a position shared by many!
A big part of the reason why generics are such a mess is the intersection of subtyping and parametric polymorphism. IMO, get rid of subtyping and use row polymorphism for records.
The point is, generics are actually pretty hard and complicated, even for people who understand them.
CS 101 students around the world are pioneering this strategy!
Your examples only show that retrofitting or piggy-backing on features to get parametric polymorphism has always resulted in sadness. Implementing generics as a goal is extremely simple - I’ve done it many times!
Also, from what I’ve heard from developers on the Rust team, making everything work just right is actually extremely challenging for them. The common complaint I hear is that the type system touches everything, so working with it is difficult.
Link to repo?
http://brianmckenna.org/blog/type_annotation_cofree
Nice, well written. Although everyone in my college Programming Languages class wrote a type annotator for a similarly trivial “language” during lab one day in about an hour. A real programming language is quite a bit harder.
He listed earlier many “real programming languages” with parametric polymorphism. It’s not exactly a colony on Mars. This is something we’ve known how to do for literally decades. And saying that languages shouldn’t have parametric polymorphism because you personally think it’s difficult to implement is like saying cars shouldn’t have transmissions or 4-stroke engines for the same reason.
Not being able to write generic data structures in Go without casts is awful and basically inexcusable for a modern language that wants to be taken seriously.
He listed earlier many “real programming languages” with parametric polymorphism.
Which one has a perfect system? The Go developers won’t introduce one because they can’t see a clean way to do it that jibes with the rest of the language.
And saying that languages shouldn’t have parametric polymorphism because you personally think it’s difficult to implement is like saying cars shouldn’t have transmissions or 4-stroke engines for the same reason.
Because it’s difficult to get right, for a lot of people, including the very capable people on the Go team. Side note, Tesla is doing well.
Not being able to write generic data structures in Go without casts is awful and basically inexcusable for a modern language that wants to be taken seriously.
I mean, it kinda sucks. I’ve never been that bothered. A lot of companies are writing critical infrastructure in Go, and the word critical implies they are taking it seriously. Writing correct, fast, and maintainable Go code is easy, and a lot of people like that.
There are no casts in Go. You can convert primitive types (such as string to []byte or int to float64), but when working with interface{}, you can’t convert, you can only do type assertions. Type assertions will fail if the asserted type doesn’t match, so you still have type safety at runtime. You still can’t hammer a square piece through a round hole.
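A quick illustration of the distinction being drawn here (my example, not from the thread): conversions are checked at compile time, assertions at run time.
package main

import "fmt"

func main() {
	var i interface{} = "hello"

	s := i.(string)  // type assertion: checked at run time, succeeds here
	b := []byte(s)   // conversion between concrete types, checked at compile time
	n, ok := i.(int) // comma-ok form: ok is false and n is 0, no panic
	fmt.Println(s, len(b), n, ok)

	_ = i.(int) // a failed assertion without the comma-ok form panics at run time
}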
There is no such thing as runtime type safety. Go’s “type assertions” are in every way a type cast.
From the specification:
T must implement the (interface) type of x; otherwise the type assertion is invalid since it is not possible for x to store a value of type T. If T is an interface type, x.(T) asserts that the dynamic type of x implements the interface T.
If the type assertion holds, the value of the expression is the value stored in x and its type is T. If the type assertion is false, a run-time panic occurs. In other words, even though the dynamic type of x is known only at run time, the type of x.(T) is known to be T in a correct program.
(Emphasis mine.)
So I’m afraid you are mistaken: type assertions are not at all like a type cast – at least not like a type cast in a weakly typed language like C, wherein they instruct the compiler to treat x as being of type T regardless of what its actual type at runtime is.
Now type assertions in Go do allow you to compile code that is not always correct, including code that is never correct. Nevertheless, they do not allow you to run code that is not correct, and thus are safe, much unlike type casts in a language like C.
Just because you can write this in Java:
And it won’t crash, doesn’t mean you’re not casting.
I was responding to your claim that a) there is no such thing as runtime type safety and b) type assertions in Go are therefore not safe.
Now in what way is your Java code example unsafe? Will it ever run incorrect code? Not as far as I can tell.
It is clear that Java’s casting does allow you to compile incorrect code, like Go’s type assertions do. I’ve already said so. (Though arguably the code in your example is not even ever incorrect.)
So if you were trying to make a point about my reply, I can’t see what that is.
Yes there is. Compare the result of running python -c '"1" + 1' and php -r '"1" + 1;'. The difference is a result of Python having more type safety than PHP, which is checked at runtime.
The difference has nothing to do with types.
This is a form of safety, but it’s not static type safety. From Types and Programming Languages:
Refining this intuition a little, we could say that a safe language is one that protects its own abstractions. Every high-level language provides abstractions of machine services. Safety refers to the language’s ability to guarantee the integrity of these abstractions and of higher-level abstractions introduced by the programmer using the definitional facilities of the language. For example, a language may provide arrays, with access and update operations, as an abstraction of the underlying memory. A programmer using this language then expects that an array can be changed only by using the update operation on it explicitly—and not, for example, by writing past the end of some other data structure. [1]
So Go will let you know at runtime if you’re casting between two incompatible types, and this is a form of safety, but not static type safety. Similarly, Python won’t let you index past the end of a list, and this is a form of safety, but, once again, not type safety.
Pierce continues:
Language safety is not the same thing as static type safety. Language safety can be achieved by static checking, but also by run-time checks that trap nonsensical operations just at the moment when they are attempted and stop the program or raise an exception. For example, Scheme is a safe language, even though it has no static type system. [2]
In the end, this is mainly an argument about definitions, but it’s important to get these definitions right when discussing language design tradeoffs, especially when these definitions clue us in to real differences between languages.
For what it’s worth, I think Go’s lack of parametric polymorphism is absolutely crippling.
[1] Pierce, Benjamin C. (2002-02-01). Types and Programming Languages (Page 28). MIT Press. Kindle Edition.
[2] Ibid.
So Go will let you know at runtime if you’re casting between two incompatible types, and this is a form of safety, but not static type safety.
I don’t understand why you’re saying this. I’m not talking about static type safety. I’m talking about type safety at runtime. Puffnfresh has claimed that no such thing exists. I’ve provided a counter-example.
This is basically a conclusion drawn from the distinction made between strongly and weakly typed languages. It’s not like I’m pulling this out of thin air.
For what it’s worth, I think Go’s lack of parametric polymorphism is absolutely crippling.
For what it’s worth, I disagree.
Both PHP and Python only have a single type, so it’s kind of nonsensical to claim that one has more type safety than the other, at least using a TaPL definition of the word “type”. What they do have is different runtime semantics for handling values with different runtime tags, which colloquially these languages unfortunately refer to as “types”.
I cannot see any point you’re making other than to indulge in a definition war. If we were at ICFP and I started talking about the type safety of Haskell and Python as if they were the same thing, then I’d applaud your correction. But in a wider audience, it’s quite clear that comparing the type safety of languages like Python and PHP is a perfectly valid and natural thing to do.
It’s not a word game, my point is that I question whether this alternative definition of the term actually gives rise to a well-defined comparison or ordering at all. What could it possibly mean for a programming language to “go wrong less” than another programming language if they both admit an infinite number of invalid programs? Hypothetically, if I have a language Blub, can I always form a language Blub' that is less type safe than Blub, and what would I have to alter to make it so? This kind of redefinition just leads to nonsense.
My example up-thread contrasting Python and PHP is absolutely not nonsense. The distinction between them is a result of a difference in each language’s type safety.
I mean, hell, if you try running the Python code, you’ll get an exception raised aptly named TypeError. What other evidence could possibly convince you?
My example up-thread contrasting Python and PHP is absolutely not nonsense.
It is a nonsensical notion if you take it to its logical conclusion, that there exists this alternative notion of “type safety” in terms of their runtime semantics that we can compare languages based on. Sure, you can contrast the addition function for a fixed set of arguments, but how do you generalize that to then make a universal claim like “Python has more type safety than PHP”?
If it’s a well-defined concept, then suppose I gave you [Python, Fortran, Visual Basic, Coq, PHP] what would be the decision procedure in the comparison “function” used to order these languages for your notion of type-safety?
It is a nonsensical notion if you take it to its logical conclusion
Why do I have to take it to its logical conclusion? The distinction exists today. Python is strongly typed and PHP is weakly typed. These descriptions are commonly used and describe each language’s type system.
Sure, you can contrast the addition function for a fixed set of arguments
The addition function? Really? That’s what you got out of my example?
Try this in a Python interpreter:
def x(): pass
x(0)
What do you get? A TypeError!
If it’s a well-defined concept
Who says it has to be well defined? I certainly didn’t. I’m merely drawing on the established conventions used to describe properties of programming languages.
I still truthfully don’t understand what your central point is. Are you merely trying to state that there exists some definition of type safety which it is nonsensical to ascribe to dynamically typed languages like Python or PHP? Great. I never contested that. But that does mean you’re just playing word games, because it’s patently obvious that that definition isn’t being invoked when discussing type safety of precisely the languages that your definition of type safety doesn’t apply to.
There is no such thing as strong or weak typing for the same reason there is no such thing as runtime type safety. They’re not well-defined.
OK. So you’re playing word games. Just as I thought.
Concepts such as strong and weak typing exist and they are used to draw meaningful comparisons between languages all the time. So you covering your ears and simply saying this doesn’t exist is a bit ludicrous.
They can’t and shouldn’t be used to draw conclusions about programming languages at all unless they have a precise meaning, which they don’t. The fact that they are used all the time to make specious arguments doesn’t make them any more accurate or precise, that’s just a consensus fallacy.
They can’t and shouldn’t be used to draw conclusions about programming languages
Except they are. I’ve given examples to support my claim. You’ve done nothing but appeal to your own definition.
I’ve committed no fallacy because I’ve drawn no conclusions from the fact that there is a consensus. I’ve merely pointed out that there exists a consensus. (Which you absurdly claimed doesn’t exist!)
I’ve merely pointed out that there exists a consensus.
No really, there isn’t a consensus on these terms. Even the Wikipedia article on the terms “weak and strong typing” prefixes everything it says by saying they have no precise meaning and that many of the proposed definitions are mutually contradictory, and should be avoided in favor of more precise terms. Probably this answer is the best explanation of why the terms are themselves completely meaningless; they’re just used for sophistic arguments to justify preconceived bias about language features.
Which is why I claim that defining this new term “runtime type safety” in terms of these other ill-defined terms is fallacious.
It boggles my mind that you think I’m trying to precisely define anything. I’m not. I’ve merely pointed to the facts: the terms weak typing and strong typing have meaning, and are used to compare and contrast language features in a way that effectively communicates key differences. These differences relate to the way types are handled at runtime.
I never once said that there weren’t any problems with these terms or that they weren’t vague.
Welcome to the subtleties of human language. You’re arguing about what should be. I’m pointing out what is.
If only Hume were here, he’d be pointing and going “This!”
There are two kinds of type safety: Static types and strong types.
What you are describing is strong types; safety at run-time. Static types go beyond that; they give safety at compile-time.
Now, in Java, generics are implemented with a static type guarantee. But there is a catch; the type parameters are erased after compilation. Also, Java doesn’t support co-variant types, which means you end up type-casting / type-asserting sometimes, though it is not very often.
However, if a language doesn’t support generics at all, then you have to use type-casts / type-assertions every time you need to reuse code!
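To make that concrete (an illustration of mine, not from the thread), Go's own container/list stores interface{} values, so reusing it means asserting on every read:
package main

import (
	"container/list"
	"fmt"
)

func main() {
	l := list.New()
	l.PushBack(42)                 // accepts any value, typed as interface{}
	n := l.Front().Value.(int) + 1 // every read back out requires a type assertion
	fmt.Println(n)
}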
However, if a language doesn’t support generics at all, then you have to use type-casts / type-assertions every time you need to reuse code!
And that language isn’t Go.
I swear, this whole thing is messed up. Gophers tend to overstate the power of structural subtyping and people who haven’t written a lick of Go seem to dismiss it entirely. Believe it or not, Go actually does have a mechanism for compile time safe polymorphism. And yes, that means static types!
The quoted sentence of mine was deliberately brief. There are many ways for a language to facilitate code reuse. Structural sub-typing allows code reuse in a different way than generics. You can’t create type-safe and re-usable collections with structural sub-typing for example.
I never implied otherwise. I made the comment I did because there are others in this thread that are repeatedly stating inaccuracies about Go. Namely, that it has no mechanisms for code reuse. Given that context, it’s unclear exactly what you were implying.
And that brings me back to my point. People seem to think that just because Go doesn’t have their favorite blend of polymorphism that Go has none of it at all. Or at the very least, completely dismiss structural subtyping simply because it’s different from what you like.
As far as code reuse goes, structural subtyping is only one piece of the puzzle. Go also has type embedding and properly implemented first class functions. (Which sounds like a weird benefit to quote, but not every language gets lexical scoping exactly right. Go does.)
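A small sketch of those three mechanisms working together (my own example, with invented names):
package main

import "fmt"

// Structural subtyping: any type with a Name() string method satisfies Named,
// with no explicit "implements" declaration.
type Named interface {
	Name() string
}

type Base struct{ name string }

func (b Base) Name() string { return b.name }

// Type embedding: Service picks up Base's Name method without inheritance.
type Service struct {
	Base
	handler func() error // a first-class function stored as a field
}

func greet(n Named) { fmt.Println("hello,", n.Name()) }

func main() {
	s := Service{Base: Base{name: "api"}, handler: func() error { return nil }}
	greet(s) // Service satisfies Named through the embedded Base
	_ = s.handler()
}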