How do Go developers deal with this in the real world? It’s understandable that a single project can’t use two different versions of the same library, but not letting one specify which version should be used sounds like a big oversight.
In the current production system I work on we require semantic versions. Dependencies are used across multiple systems and one makes an explicit decision to consume a new version. This means builds are always reproducible. And one always knows what’s going into their production build. And one can make backwards breaking changes and give other projects time to consume them.
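JFTR, that’s how SoundCloud does it: http://peter.bourgon.org/go-in-production/ (scroll a bit down for the “Dependency Management” section)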
They have extensive experience in using Go in production (AFAIK since H1 2012, maybe even earlier), so I would say the stuff they came up with is sound.
My approach was to create a Makefile to build my project. It would:
go get any dependencies
git checkout to the dependency’s tag/SHA that I indicated in the Makefile (since everyone used GitHub, this wasn’t a problem)
go install
go build my project
This isn’t hard for the 5-6 dependencies I had, and I have no concerns about it scaling - at least, so long as I didn’t have some git-based repos, some Mercurial, a handful of tarballs… but that doesn’t seem to happen in the Go world, so I didn’t worry.
The biggest thing about Go & dependencies that worried me was that the quasi-official stance was: “you should always check out to master, and the library developer should always ensure master builds & never breaks backwards compatibility”. Do I trust the core language devs to get that right? Sure. Do I trust most library developers? Not a chance.
That’s why I do think that a batteries-included approach would be better.
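That’s a ridiculous stance to make anyways. All major versions should be new repos?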
Yeah, that one really was a head scratcher. I think opinions on that have shifted since I last used Go, but I’m not sure.
ETA: Thinking back I vaguely remember two differing reasons on why that approach was taken. The first was “does it really matter, there’s so much churn right now that libraries come and go” which was understandable… packages were being born & dying before they really considered making a major release. The second was “the core language does it this way so we should too” which is a noble sentiment but likely unrealistic for libraries.
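Any current Go users want to comment?
If you’re concerned about versioning, then you can use a service like gopkg.in to map a URI to a branch in your repo.
I maintain several popular Go libraries and I follow the same practice as Go’s standard library: never introduce backwards incompatible changes.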
No. The beauty of using URIs for imports is that you can impose whatever scheme you want. There are several such services. One popular one is gopkg.in, which for example, lets you tie package URLs to specific tagged versions in your repository.
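To make that concrete, here’s a minimal, hypothetical sketch of what such an import looks like (gopkg.in/yaml.v2 is just a well-known example package, not one mentioned in this thread):

package main

import (
    "fmt"

    // gopkg.in maps this path to the upstream repository's branch/tag for
    // major version 2, so "go get" keeps fetching 2.x instead of whatever master is.
    yaml "gopkg.in/yaml.v2"
)

func main() {
    out, err := yaml.Marshal(map[string]string{"pinned": "v2"})
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}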
It quickly becomes painful to encode a lot of metadata in a URI. Package managers generally evolve to include various constraints, like checksums. But restricting to just the major means builds are still not reproducible, as I commented above to @moses.
I think it’s also an odd choice to allow URLs in source files. That means making a project local involves making code changes or doing extra work elsewhere to make it clear that the URLs in source are now meaningless. In general, I believe package information in source is a failed experiment, having had to deal with it quite a bit in Erlang.
But restricting to just the major means builds are still not reproducible
You misread my comment. I gave you an example. Restricting to major version is a feature of that service. It is not a requirement of the build tool. There are other services out there that let you put a sha or a tag name in the URI.
I think it’s also an odd choice to allow URLs in source files.
It’s one of the many things I love about Go.
In general, I believe package information in source is a failed experiment
That experiment is flourishing in the Go community.
Can you depend on a specific SHA? In that case, your biggest problem is if the entire project is deleted, or someone erases the main-line version of history and replaces it with another, and nobody has ever forked the project. That seems reasonably safe. You could even signal to people which SHAs are considered “stable”. You could also build semantic versioning on top of Git SHAs, as project metadata at the tip of the master branch. Seems a little hacky, but definitely workable.
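In our build system we just use tags. And we are slowly moving over to gpg signed tags, only accepting tags from specific sets of developers.
For any external dependencies we actually mirror it to our local git repo first so it can disappear and we are OK.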
I’d be nervous about tags because they’re much less indelible than SHAs. It’s easy to just make a tag point to another SHA, whereas it’s a hassle to remove a SHA. Having a local repo makes it better, but I think that it’s easy to lose a local tag if the remote has deleted the tag.
The article also mentioned using tags, although he sounded not that jazzed about them.
Unfortunately SHAs have a horrible user interface. There is zero semantic information and that doesn’t scale very well. Might as well use git submodules in that case. Given that we mirror things locally and groups own specific repos, and we are moving towards signed tags (you know who redirected the tag), it’s a pleasant solution. It’s important to remember: your fellow developers might be dumb but they are not malicious. And if you’re on distributed version control, a backup exists on every developer’s machine.
The tag solution the OP linked to is actually rather poor. It only allows specifying the major version, which means builds are still not reproducible.
I agree with the author’s point here:
Remember: everything about third-party code is a decision about trust. Waving the wand of “versioned dependencies” doesn’t change that. In fact, it might fool us into seeing guarantees that don’t exist.
The problem with the current state of affairs is that it doesn’t work very well even with first-party repos.
It’s actually pretty simple to do in practice. I clone dependencies to our own archive, then everyone works from this “known good” set of library versions. When someone wants to update a library they can update and test it before committing back to our archive. It’s incredibly straightforward really.
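This does not suit the following use cases.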
If you need multiple versions accessible at the same time. For example, service A uses version v1 and service B uses version v2. Each service having its own complete copy of deps is possible but rather frustrating, given that source control tools already provide a mechanism to have multiple versions accessible.
If you want reproducible builds. This can be a requirement from an external entity. But it’s also very handy for debugging. The current system makes git bisect difficult to use to track down errors.
Knowing which versions of libraries are in your release. If you always get whatever ‘master’ is, then between reviewing code and building a release you can get new commits, which is very confusing.
Can you elaborate on your second bullet point? Normally I’d see reproducible builds being an argument for copying dependencies into your own source control, since depending on some external entity would take that decision out of your hands.
I don’t understand your third bullet point - it seems like if dependencies are checked into your source control, then you don’t have to “always get whatever master is” - you can just use the dependency at the version it is on your branch.
I’m assuming that you are suggesting that one take every dependency and put it into a single monolithic repository.
That works for some organizations; however, in my experience it is antithetical to SOA.
With multiple services, often there is a foundation of common components and often these components are in their own repos. Sometimes these components have backwards breaking changes in versions and moving the entire organization in lockstep is costly. And if the foundation components are in other repos, reproducible builds are not possible unless you can specify the version to use.
One could put the foundation components into each service’s source tree, but that does not scale well, IME. It means code is duplicated all over the place and makes it harder to upgrade code when necessary. And version control tools already support the idea of versioning code, so it seems unnecessary.
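“Go is not good because it’s different from my favorite programming languages.”
I’m not convinced.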
I disagree with this article. I think it does hit on a lot of what makes a language a good teaching language: simplicity, tooling, and concurrency support are all important components. But I think the inflexibility of Go is a little unfortunate for a teaching language.
I’m going to hedge that unopinionated languages are slightly better suited for computer science education, since they allow a class to explore each different bias without suffering from the language’s opinions in the process. That’s part of what makes SICP/Scheme so great! Everything is equally simple/painful and you end up with a fairly unbiased view of what each paradigm is like.
Of course in a “real world setting” some biases can be incredibly helpful by enforcing consistency throughout the ecosystem.
The type-theorist in me can’t help but say: Go’s static type system is not what you should introduce as a canonical view of what static typing is like; SML would be far better for that. It’s far too easy to chafe against the inherent restrictions of Go’s type system while trying to accomplish mundane tasks.
Type theory is important, but it’s not the be-all, end-all topic of computer science education. Arguably—and I argue that—a programming language with a restrictive type system is in fact better for teaching a lot of computer science fundamentals.
Of course not, but types are an important part of computer science and at some point everyone should be given a brief overview of type systems. This shouldn’t be done in Go.
As far as being better for learning, at a certain point yes. No one would say to learn programming with Agda! :) But you also don’t want a system so restrictive that it stops you from writing many common things safely. I’d rather spend a few hours explaining parametric polymorphism than explain why segfaults or casting errors are happening.
Go programs that don’t use the unsafe package can’t segfault. (EDIT: To be more correct, Go prevents all segfaults that are caused by memory access violations. Go will let you dereference a null pointer, but the result is a helpful stacktrace.)
There are plenty of opportunities to use polymorphism in Go without resorting to runtime casts.
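Is this not a segfault?
http://play.golang.org/p/PczjVXz2oN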
I’d call that a null pointer dereference. The Go runtime probably has its own handlers to recover from the actual segfault and present a nice stacktrace to the user.
I generally think of segfaults as memory access violations. e.g., Reading or writing memory that your program doesn’t have access to. Go will not let you do that by virtue of its type system (sans unsafe or cgo).
I’m sure reasonable people will disagree. But the key point to take home is that Go has strictly better memory safety than C.
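It’s a normal panic, so it can be recovered: http://play.golang.org/p/cl1F2Y9YI0
I was thinking of C in that case, it’s the only other statically typed language I see used much anymore without parametric polymorphism.
And that’s true, and plenty of situations where you still have to :)
For anyone who can’t open the playground links above, here is a minimal sketch of the behaviour being described (not the linked code itself): dereferencing a nil pointer panics at runtime rather than segfaulting, and the panic can be recovered.

package main

import "fmt"

type T struct{ n int }

func main() {
    defer func() {
        // The nil dereference below surfaces as an ordinary runtime panic,
        // complete with a stack trace, so it can be caught with recover.
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    var p *T
    fmt.Println(p.n) // nil pointer dereference: panics, does not segfault
}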
Does this mean that ultimately, many of us will be bitter about how lousy Go is, and wish for something better? That’s a bummer, Go is really quite nice. But it’s awesome because eventually, there will be something so fabulous it makes Go look like garbage. :D
Also regarding Go and generics, I see the usefulness/complexity thing cited everywhere, but how does Go as a language help avoid needing generics? Slices, maps, and channels are generic, is that really all you need to do things?
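I’ve already seen many things which make it look like that.
No, and so Go requires you to lie to the type system (i.e. cast) when you want to do things generically.
You can’t just throw Go under the bus saying it “looks like garbage” without naming the languages you feel are superior.
The above are much more capable of writing reusable functions. Some are again much more capable than the others, but the bar is pretty low!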
Whining about a language is a waste of everyone’s time. If you have something useful to contribute, do so. Plenty of people have given detailed, reasoned, substantial accounts of their experiences with Go, good and bad. Maybe it wasn’t clear, but that sort of answer is the kind I was inviting.
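Criticising things is an important thing to do. Being smug about something is not useful.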
I’d argue that constructively criticizing things is an important thing to do. If criticism isn’t constructive, then you have little hope of the right people reconsidering what they think they know.
Your comments in this thread are certainly not constructive.
Taking this discussion meta, how would you classify your comments as not smug? I agree that criticism is important, but I don’t see how you have done anything but call Go garbage and list languages with generics of some variety.
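Criticizing things is indeed an important thing to do, but you haven’t succeeded in doing it so far.
Go supports a different form of compile time safe polymorphism via structural sub-typing. No lying necessary.
I agree that Go supports a single, limited form of polymorphism. Very quickly you come to a point where you have to lie.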
In what situations? When you are writing your own generic containers you run into a problem, but otherwise, when do you actually need more than an interface? The built in containers are quite capable for most applications. If you truly need a specific data structure, how often are you going to need to use it on any possible type? A database may use a skip list for good concurrency working with tuples. A text editor may use a rope for working with strings. I would argue that using those specialized data structures generically in every situation is marginally useful.
The Go developers argue that generics are not adequately useful to be added, and I haven’t seen a compelling example that indicates otherwise. Do you have one?
The built in containers are quite capable for most applications. If you truly need a specific data structure, how often are you going to need to use it on any possible type?
I want a Tree data structure, maybe I’ll only use it on 3 different types of values in my application but why:
Should I have to write it myself rather than rely on a library?
Should I have to either write it 3 different times or give up type safety by casting?
Should I know about what’s in the Tree when I don’t need to?
The 3rd point might not seem like a problem but it has huge implications, documented in a concept called parametricity.
I would argue that using those specialized data structures generically in every situation is marginally useful.
It’s not just data structures. Functions shouldn’t have to be unnecessarily specialised. If I need to write a function like so:
func idInt(a int) int {
    return a
}
Why should we care about whether it’s an int or a string or a gobbledok? Our alternative is to use an interface:
func id(a interface{}) interface{} {
    return a
}
But now we know nothing about the output type when we go to use it. We’d have to cast - will that work? Only after knowing exactly the definition of the function do we know for sure.
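To make the pain concrete, a small hypothetical sketch of what the interface{} version costs at the call site: the caller has to assert the result back to a type it already knew, and a wrong assertion only fails at runtime.

package main

import "fmt"

// id takes and returns interface{}; the compiler no longer knows that the
// output type matches the input type.
func id(a interface{}) interface{} {
    return a
}

func main() {
    n := id(42).(int) // caller must assert the type; this compiles even if wrong
    // id(42).(string) would also compile, but panic when run.
    fmt.Println(n)
}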
I hope I don’t have high standards by refusing to use a system which doesn’t allow using types as documentation or which asks for information it doesn’t need nor use.
I am familiar with all of the concepts you described. However, Go emphasizes simplicity. I haven’t ever heard someone claim that they encountered a serious problem with lack of total parametric polymorphism that so critically impeded their ability to write their application that they would trade away the simplicity of Go. Go was designed for a specific purpose, to write concurrent applications that are easy to reason about, both in terms of their concurrent aspects and performance as a whole.
It’s not just data structures. Functions shouldn’t have to be unnecessarily specialised.
This is really more of a philosophical point than a practical one. While such a feature is nifty, it’s not that useful. We can talk all day about how neat monads are (really neat), or how map is so much more elegant than a for loop, but at the end of the day when you are writing a network server chewing through tens of thousands of requests per second, reasoning about the exact performance characteristics of a for loop is easier.
I want a Tree data structure
Why? Why not use a map? There are plenty of ways to use trees besides maps, ways that involve custom, non-generic logic to create and maintain the tree to reap some performance benefit or other desirable characteristic.
I hope I don’t have high standards by refusing to use a system which doesn’t allow using types as documentation or which asks for information it doesn’t need nor use.
You have different priorities. You aren’t wrong for your priorities, and your priorities don’t make all other priority sets wrong. You seem to be claiming that the lack of generics is so crippling that Go is trash not worth using, by anybody. That’s how I read your tone anyway. But that flies in the face of reality: plenty of people are accomplishing significant feats with Go, without generics. I originally asked what language features—or possibly characteristics of the problems Go is solving—accommodate the lack of generics. Why do the developers feel they aren’t useful enough to add? Defending Go has made me come up with concrete reasons, and I have answered my question for myself.
Very quickly you come to a point where you have to lie.
I don’t think so. Go is about making the simple decision, not the ultra nifty yet often unnecessarily complex one. Make the simple choice, and you don’t have to lie. That puts Go at the opposite end of the spectrum from languages like Haskell, a trait that a lot of people consider a feature.
Also, I should note that around 4 years ago I too thought Go was a sensible idea and I was using it for things but I noticed the problems and started using more capable tools.
This is really more of a philosophical point than a practical one.
No, code duplication is a very practical issue - it’s ridiculous to say otherwise.
.. reasoning about the exact performance characteristics of a for loop is easier.
Parametric polymorphism is not about for loops.
Why? Why not use a map?
Do you see the irony here? Making a false compromise in the name of performance but then sacrificing it by using a Map?
You seem to be claiming that the lack of generics is so crippling that Go is trash not worth using, by anybody.
Yes, I refuse to use a language which doesn’t allow code reuse and I’ll point out the silly false sacrifice made for “simplicity” reasons. I think more should have the same standards.
That’s how I read your tone anyway
I have no tone.
But that flies in the face of reality: plenty of people are accomplishing significant feats with Go, without generics.
People are accomplishing things despite the lack of generics, by not abstracting and not reusing code.
Why do the developers feel they aren’t useful enough to add? Defending Go has made me come up with concrete reasons, and I have answered my question for myself.
I have seen the answer, too. They put up with code duplication and don’t allow abstraction.
Go is about making the simple decision, not the ultra nifty yet often unnecessarily complex one.
Parametric polymorphism as a type system feature has been around for about 30 years now. It’s extremely simple to describe and implement.
Go’s “simplicity” (i.e. lack of parametric polymorphism) is a silly thing to trade for. Your application becomes more complicated because of no abstraction and no code reuse so that the language spec doesn’t spend a few paragraphs defining parametric types!
C++ templates are just fancy code generation. Well, fancy is a stretch. Built in though.
Having a type system that handles generics is a better solution for generics, certainly. But code generation isn’t horrific in and of itself. It’s one of many tools in the world of programming tools, and is fairly easy to work with. Protobufs use code generation to make parsers and interfaces, and that’s fine in my opinion.
I can’t think of any nontrivial applications for which generics wouldn’t be useful. Besides code reuse, parametric polymorphism gives you better reasoning about what your code is doing via parametricity/free theorems.
Linux kernel. Written in C, it obviously doesn’t use parametric polymorphism though it approximates it in some places. However, most of the algorithms and data structures are tuned for their particular use patterns, and thus the code is not generic.
HTTP server. I can’t think of a reason to use generic anything. It’s mostly pushing bytes and data validation. The Go standard library includes a high performing concurrent HTTP server. Nginx and Apache are both written in C. Most file servers fit here, actually.
Message buses. Any algorithms written for a message bus are fairly specific, and a non-trivial bus will have its own versions of all of them.
Load balancers. TCP, UDP, HTTP, whatever you like, it’s a hash table sitting on a shitload of sockets. No polymorphism here beyond what Go provides.
Firewalls. A firewall has to be low latency, basically transparent. Custom up the wazoo, an algorithm in a firewall can branch on anything in those headers for any number of reasons.
Databases. Skip lists seem like a great thing to make generic, until you start optimizing for more and more fields in the elements and it only works for DB records anyway. And more importantly, you want to know where every last byte is, all the time.
Now, I’m not saying that generics won’t be somewhat useful for a lot of these. Of course they would in some parts of code. But most of those use cases are covered with Go’s typing system. The number of times you need to be able to specialize some code to literally any type ever is not that high. An interface will do just fine.
Yes, I refuse to use a language which doesn’t allow code reuse
They put up with code duplication and don’t allow abstraction.
Seriously?
Parametric polymorphism as a type system feature has been around for about 30 years now. It’s extremely simple to describe and implement.
Then why are C++ templates such a mess? And why is it that Java generics are implemented with type erasure, don’t work with primitives, and aren’t safe with arrays?
You are right though, it’s even said on the Go site that some code duplication is the preferred solution to certain things. Many people find that a small amount of duplicate code is acceptable in certain circumstances, otherwise the industry standard would be Haskell. For an application that necessarily relies heavily on generic types, I wouldn’t pick Go.
Your application becomes more complicated because of no abstraction and no code reuse so that the language spec doesn’t spend a few paragraphs defining parametric types!
Exactly zero abstraction is ideal. Main is the only function anyone needs.
Then why are C++ templates such a mess? And why is it that Java generics are implemented with type erasure, don’t work with primitives, and aren’t safe with arrays?
C++ templates are not a great idea
Type erasure is a brilliant idea
Java’s generics definitely have problems and could have been done much better, that’s the problem with trying to retrofit generics and making compromises!
Exactly zero abstraction is ideal. Main is the only function anyone needs.
I truly hope and believe this is not a position shared by many!
A big part of the reason why generics are such a mess is because of the intersection of subtyping and parametric polymorphism. IMO, get rid of subtyping and use row polymorphism for records.
Your examples only show that retrofitting or piggy-backing on features to get parametric polymorphism has always resulted in sadness. Implementing generics as a goal is extremely simple - I’ve done it many times!
Also, from what I’ve heard from developers on the Rust team, making everything work just right is actually extremely challenging for them. The common complaint I hear is that the type system touches everything, so working with it is difficult.
Nice, well written. Although everyone in my college Programming Languages class wrote a type annotator for a similarly trivial “language” during lab one day in about an hour. A real programming language is quite a bit harder.
He listed earlier many “real programming languages” with parametric polymorphism. It’s not exactly a colony on Mars. This is something we’ve known how to do for literally decades. And saying that languages shouldn’t have parametric polymorphism because you personally think it’s difficult to implement is like saying cars shouldn’t have transmissions or 4-stroke engines for the same reason.
Not being able to write generic data structures in Go without casts is awful and basically inexcusable for a modern language that wants to be taken seriously.
He listed earlier many “real programming languages” with parametric polymorphism.
Which one has a perfect system? The Go developers won’t introduce one because they can’t see a clean way to do it that jibes with the rest of the language.
And saying that languages shouldn’t have parametric polymorphism because you personally think it’s difficult to implement is like saying cars shouldn’t have transmissions or 4-stroke engines for the same reason.
Because it’s difficult to get right, for a lot of people, including the very capable people on the Go team. Side note, Tesla is doing well.
Not being able to write generic data structures in Go without casts is awful and basically inexcusable for a modern language that wants to be taken seriously.
I mean, it kinda sucks. I’ve never been that bothered. A lot of companies are writing critical infrastructure in Go, and the word critical implies they are taking it seriously. Writing correct, fast, and maintainable Go code is easy, and a lot of people like that.
There are no casts in Go. You can convert primitive types (such as string to []byte or int to float64), but when working with interface{}, you can’t convert, you can only do type assertions. Type assertions will fail if the asserted type doesn’t match, so you still have type safety at runtime. You still can’t hammer a square piece through a round hole.
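A minimal sketch of that distinction, using nothing beyond the standard library: conversions translate between compatible concrete types at compile time, while assertions on an interface{} are checked at runtime, either panicking or reporting failure through the comma-ok form.

package main

import "fmt"

func main() {
    b := []byte("hello") // conversion: string to []byte, resolved at compile time

    var x interface{} = 42
    s, ok := x.(string) // type assertion with comma-ok: checked at runtime, no panic
    fmt.Println(string(b), s, ok)

    // x.(string) without the ok would panic: the square piece never goes
    // through the round hole, it just fails loudly at runtime.
}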
T must implement the (interface) type of x; otherwise the type assertion is invalid since it is not possible for x to store a value of type T. If T is an interface type, x.(T) asserts that the dynamic type of x implements the interface T.
If the type assertion holds, the value of the expression is the value stored in x and its type is T. If the type assertion is false, a run-time panic occurs. In other words, even though the dynamic type of x is known only at run time, the type of x.(T) is known to be T in a correct program.
(Emphasis mine.)
So I’m afraid you are mistaken: type assertions are not at all like a type cast – at least not like a type cast in a weakly typed language like C, wherein they instruct the compiler to treat x as being of type T regardless of what its actual type at runtime is.
Now type assertions in Go do allow you to compile code that is not always correct, including code that is never correct. Nevertheless, they do not allow you to run code that is not correct, and thus are safe, much unlike type casts in a language like C.
I was responding to your claim that a) there is no such thing as runtime type safety and b) type assertions in Go are therefore not safe.
Now in what way is your Java code example unsafe? Will it ever run incorrect code? Not as far as I can tell.
It is clear that Java’s casting does allow you to compile incorrect code, like Go’s type assertions do. I’ve already said so. (Though arguably the code in your example is not even ever incorrect.)
So if you were trying to make a point about my reply, I can’t see what that is.
Yes there is. Compare the result of running python -c '"1" + 1' and php -r '"1" + 1;'. The difference is a result of Python having more type safety than PHP, which is checked at runtime.
This is a form of safety, but it’s not static type safety. From Types and Programming Languages:
Refining this intuition a little, we could say that a safe language is one that protects its own abstractions. Every high-level language provides abstractions of machine services. Safety refers to the language’s ability to guarantee the integrity of these abstractions and of higher-level abstractions introduced by the programmer using the definitional facilities of the language. For example, a language may provide arrays, with access and update operations, as an abstraction of the underlying memory. A programmer using this language then expects that an array can be changed only by using the update operation on it explicitly—and not, for example, by writing past the end of some other data structure. [1]
So Go will let you know at runtime if you’re casting between two incompatible types, and this is a form of safety, but not static type safety. Similarly, Python won’t let you index past the end of a list, and this is a form of safety, but, once again, not type safety.
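As a concrete illustration (a minimal sketch, not an example from the book): both operations below are accepted by the compiler, and it is runtime checks that keep the abstractions intact; the failed assertion is reported, and the out-of-range index panics.

package main

import "fmt"

func main() {
    var v interface{} = "not an int"
    n, ok := v.(int) // runtime check: ok is false, nothing silently goes wrong
    fmt.Println(n, ok)

    xs := []int{1, 2, 3}
    i := 3
    fmt.Println(xs[i]) // compiles fine; the runtime bounds check panics here
}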
Pierce continues:
Language safety is not the same thing as static type safety. Language safety can be achieved by static checking, but also by run-time checks that trap nonsensical operations just at the moment when they are attempted and stop the program or raise an exception. For example, Scheme is a safe language, even though it has no static type system. [2]
In the end, this is mainly an argument about definitions, but it’s important to get these definitions right when discussing language design tradeoffs, especially when these definitions clue us in to real differences between languages.
For what it’s worth, I think Go’s lack of parametric polymorphism is absolutely crippling.
[1] Pierce, Benjamin C. (2002-02-01). Types and Programming Languages (Page 28). MIT Press. Kindle Edition.
So Go will let you know at runtime if you’re casting between two incompatible types, and this is a form of safety, but not static type safety.
I don’t understand why you’re saying this. I’m not talking about static type safety. I’m talking about type safety at runtime. Puffnfresh has claimed that no such thing exists. I’ve provided a counter-example.
This is basically a conclusion drawn from the distinction made between strongly and weakly typed languages. It’s not like I’m pulling this out of thin air.
For what it’s worth, I think Go’s lack of parametric polymorphism is absolutely crippling.
Both PHP and Python only have a single type, so it’s kind of nonsensical to claim that one has more type safety than the other, at least using a TaPL definition of the word “type”. What they do have is different runtime semantics for handling values with different runtime tags, which these languages colloquially (and unfortunately) refer to as “types”.
I cannot see any point you’re making other than to indulge in a definition war. If we were at ICFP and I started talking about the type safety of Haskell and Python as if they were the same thing, then I’d applaud your correction. But in a wider audience, it’s quite clear that comparing the type safety of languages like Python and PHP is a perfectly valid and natural thing to do.
It’s not a word game; my point is that I question whether this alternative definition of the term actually gives rise to a well-defined comparison or ordering at all. What could it possibly mean for a programming language to “go wrong less” than another programming language if they both admit an infinite number of invalid programs? Hypothetically, if I have a language Blub, can I always form a language Blub' that is less type safe than Blub, and what would I have to alter to make it so? This kind of redefinition just leads to nonsense.
My example up-thread contrasting Python and PHP is absolutely not nonsense. The distinction between them is a result of a difference in each language’s type safety.
I mean, hell, if you try running the Python code, you’ll get an exception raised aptly named TypeError. What other evidence could possibly convince you?
My example up-thread contrasting Python and PHP is absolutely not nonsense.
It is a nonsensical notion if you take it to its logical conclusion, that there exists this alternative notion of “type safety” in terms of their runtime semantics that we can compare languages based on. Sure, you can contrast the addition function for a fixed set of arguments, but how do you generalize that to then make a universal claim like “Python has more type safety than PHP”?
If it’s a well-defined concept, then suppose I gave you [Python, Fortran, Visual Basic, Coq, PHP] what would be the decision procedure in the comparison “function” used to order these languages for your notion of type-safety?
It is a nonsensical notion if you take it to its logical conclusion
Why do I have to take it to its logical conclusion? The distinction exists today. Python is strongly typed and PHP is weakly typed. These descriptions are commonly used and describe each language’s type system.
Sure, you can contrast the addition function for a fixed set of arguments
The addition function? Really? That’s what you got out of my example?
Try this in a Python interpreter:
def x(): pass
x(0)
What do you get? A TypeError!
If it’s a well-defined concept
Who says it has to be well defined? I certainly didn’t. I’m merely drawing on the established conventions used to describe properties of programming languages.
I still truthfully don’t understand what your central point is. Are you merely trying to state that there exists some definition of type safety for which it is nonsensical to ascribe to dynamically typed languages like Python or PHP? Great. I never contested that. But that does mean you’re just playing word games, because it’s patently obvious that that definition isn’t being invoked when discussing type safety of precisely the languages that your definition of type safety doesn’t apply to.
OK. So you’re playing word games. Just as I thought.
Concepts such as strong and weak typing exist and they are used to draw meaningful comparisons between languages all the time. So you covering your ears and simply saying this doesn’t exist is a bit ludicrous.
They can’t and shouldn’t be used to draw conclusions about programming languages at all unless they have a precise meaning, which they don’t. The fact that they are used all the time to make specious arguments doesn’t make them any more accurate or precise, that’s just a consensus fallacy.
They can’t and shouldn’t be used to draw conclusions about programming languages
Except they are. I’ve given examples to support my claim. You’ve done nothing but appeal to your own definition.
I’ve committed no fallacy because I’ve drawn no conclusions from the fact that there is a consensus. I’ve merely pointed out that there exists a consensus. (Which you absurdly claimed doesn’t exist!)
I’ve merely pointed out that there exists a consensus.
No really, there isn’t a consensus on these terms. Even the Wikipedia article on the terms “weak and strong typing” prefixes everything it says by saying they have no precise meaning, that many of the proposed definitions are mutually contradictory, and that they should be avoided in favor of more precise terms. Probably this answer is the best explanation of why the terms are themselves completely meaningless; they’re just used for sophistic arguments to justify preconceived bias about language features.
Which is why I claim the defining this new term “runtime type safety” in terms of these other ill-defined terms is fallacious.
It boggles my mind that you think I’m trying to precisely define anything. I’m not. I’ve merely pointed to the facts: the terms weak typing and strong typing have meaning, and are used to compare and contrast language features in a way that effectively communicates key differences. These differences relate to the way types are handled at runtime.
I never once said that there weren’t any problems with these terms or that they weren’t vague.
Welcome to the subtleties of human language. You’re arguing about what should be. I’m pointing out what is.
There are two kinds of type safety: Static types and strong types.
What you are describing is strong types; safety at run-time. Static types go beyond that; they give safety at compile-time.
Now, in Java, generics are implemented with static type guarantees. But there is a catch: the type parameters are erased after compilation. Also, Java doesn’t support co-variant types, which means you end up type-casting / type-asserting sometimes, though it is not very often.
However, if a language doesn’t support generics at all, then you have to use type-casts / type-assertions every time you need to reuse code!
However, if a language doesn’t support generics at all, then you have to use type-casts / type-assertions every time you need to reuse code!
And that language isn’t Go.
I swear, this whole thing is messed up. Gophers tend to overstate the power of structural subtyping and people who haven’t written a lick of Go seem to dismiss it entirely. Believe it or not, Go actually does have a mechanism for compile time safe polymorphism. And yes, that means static types!
The quoted sentence of mine was deliberately brief. There are many ways for a language to facilitate code reuse. Structural sub-typing allows code reuse in a different way than generics. You can’t create type-safe and re-usable collections with structural sub-typing, for example.
I never implied otherwise. I made the comment I did because there are others in this thread that are repeatedly stating inaccuracies about Go. Namely, that it has no mechanisms for code reuse. Given that context, it’s unclear exactly what you were implying.
And that brings me back to my point. People seem to think that just because Go doesn’t have their favorite blend of polymorphism that Go has none of it at all. Or at the very least, completely dismiss structural subtyping simply because it’s different from what you like.
As far as code reuse goes, structural subtyping is only one piece of the puzzle. Go also has type embedding and properly implemented first class functions. (Which sounds like a weird benefit to quote, but not every language gets lexical scoping exactly right. Go does.)
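A small sketch of what that looks like in practice (the type and method names are made up for illustration): the interface is satisfied implicitly by any type with the right method set, and embedding lets another type pick those methods up without any subtype declaration.

package main

import "fmt"

type Sizer interface {
    Size() int
}

type File struct{ bytes int }

func (f File) Size() int { return f.bytes } // File satisfies Sizer implicitly

type LoggedFile struct {
    File // embedding: LoggedFile gets File's Size method, so it satisfies Sizer too
    name string
}

func report(s Sizer) { // checked at compile time; no casts or runtime assertions
    fmt.Println(s.Size())
}

func main() {
    report(File{bytes: 10})
    report(LoggedFile{File: File{bytes: 20}, name: "log"})
}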
This is very misleading. From the long exchange in the Gist, to me this appears to be blown out of proportion. While I understand this guy’s point that it’s a “security” issue, it seems to be a feature that was implemented (as noted by DigitalOcean) to combat human error - oops I deleted that droplet, I need it back.
I’ve created a few test droplets and removed them, I never really paid any attention to the fact that I can still recover them up until a certain time. This just sounds like someone unhappy with the feature. If that level of security and assurance is needed, I don’t think a VPS provider is where you should be looking to store your data in the first place.
Having worked for a large German web hosting company (providing virtual servers, shared hosting including email, MySQL databases, the usual stuff), I can tell you that whenever you have a large amount of users, you will always have quite a few users that accidentally delete some of their stuff, and want it back. And, of course, the users paying the least for the service scream the loudest (which shows in negative comments on web forums and social media and definitely has an effect on the company’s reputation), so you always implement some mechanism to make it possible to recover data that was “deleted”. And you usually do so by locking or disabling said data (making it inaccessible to the user), marking when it was disabled, and then delete after a certain period, e.g. after a month. Usually, most users will complain within a day or two to get their stuff recovered. Of course that’s not exactly cool for people who expect their data to be securely deleted immediately, but it greatly increases user satisfaction whenever users make mistakes and click the wrong buttons.
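A minimal sketch of that kind of delayed-deletion scheme (entirely hypothetical; not DigitalOcean’s or any hosting provider’s actual implementation): deleting only disables the resource and records when, and a periodic job purges anything that has been disabled longer than the retention window.

package main

import (
    "fmt"
    "time"
)

type Droplet struct {
    ID         int
    DisabledAt *time.Time // nil means the droplet is still active
}

// Delete only marks the droplet as disabled, so it can still be recovered.
func (d *Droplet) Delete(now time.Time) { d.DisabledAt = &now }

// shouldPurge reports whether the droplet has been disabled for longer than
// the retention window and should now be erased for real.
func shouldPurge(d *Droplet, now time.Time, retention time.Duration) bool {
    return d.DisabledAt != nil && now.Sub(*d.DisabledAt) > retention
}

func main() {
    d := &Droplet{ID: 1}
    d.Delete(time.Now().Add(-31 * 24 * time.Hour))      // "deleted" a month ago
    fmt.Println(shouldPurge(d, time.Now(), 30*24*time.Hour)) // true: past the window
}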
Because he’s making up new meanings for words that already exist and trying to convince people that his definitions are better than the ones that people have been using for hundreds of years.
While that’s true, maybe the more relevant aspect is that for the first half-century or more of computer programming, there wasn’t a clear distinction. I do think that these new definitions have caught on, though, so we should use them.
Applying the literal definition of concurrency (things actually happening at the same time) to computer programs is something that’s only made sense recently, because it used to be literally impossible for a CPU to do more than one thing at a time. That doesn’t mean the literal definition of concurrency is some new, questionable, or unclear thing.
Well, in the early 50s there was already competition between “parallel” and “serial” computers, but the difference was that we were talking about how the bits in a machine word were treated when you were e.g. adding two words together. Parallel computers were much faster but also more expensive. Multicore shared-memory computers date from at least the 1960s: the CDC 6600 (1965) had a “peripheral processing unit” that time-sliced among 10 threads, context-switching every cycle, each with its own separate memory, in addition to having access to the central processing unit’s memory. And of course any computer with a DMA peripheral (which dates from the 1960s if not the late 1950s) is in some sense doing more than one thing at a time. Symmetric multiprocessing dates from, I think, the late 1960s. Meanwhile, what we now know as “concurrency” was already in play—even before the CDC 6600’s PPUs, “time sharing” (what we now know as having multiple processes) was an active research topic.
So parallelism and concurrency have been around in more or less their current form for half a century. Nevertheless, I think the distinction Pike is drawing between the two terms' meanings is much more recent, like the last decade or two; but it does seem to have become accepted.
Rust is also slow because it is not built to be parallel. The language is concurrent, but this is a word game: in the past few years the terms have been redefined such that “concurrent” is (roughly) non-blocking cooperative multitasking (such as is implemented by node.js and GNU Pth), and “parallel” is reserved for actually doing more than one thing simultaneously (whether on separate CPUs or separate cores of a single CPU). Rust’s memory model doesn’t help: there is no shared memory, and ownership types make fork/join parallelism difficult.
I don’t know that it’s due to confusion. There are a lot of self-taught people who might be running into this the first time. There will always be the next generation of programmers coming up who just haven’t learned this yet. We’ve all got to start somewhere and learn this at some point, and not everyone’s been programming since they were 15. I do like seeing it crop up, it’s a good video to learn from.
I actually had to keep rereading about parallelism vs concurrency. I got it when I read it, but after some time I found myself trying to explain it and failing to, and my thing is if I can’t explain it easily then I don’t understand it well enough, so I go back and read it again. When I want to implement concurrency, I still of course have to look up good ways to do that.
I don’t think there’s really much confusion. It just helps a lot to define terminology at some point, and go on from there. And that’s what I think Rob Pike does very well in this presentation, to show the abstract programming model of concurrency, and how an implementation (in this case, Go) can achieve parallelism by leveraging fundamental properties of the abstract model.
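A tiny, illustrative sketch of that distinction in Go terms (not code from the talk): the goroutines below make the program concurrent regardless of hardware, while GOMAXPROCS governs how much actual parallelism the runtime may use when executing them.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // Concurrency: the program is structured as independently executing tasks.
    // Parallelism: how many of them may literally run at the same instant.
    fmt.Println("parallelism limit:", runtime.GOMAXPROCS(0)) // 0 only queries the value

    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(n int) { // concurrent by construction, parallel only if the runtime allows
            defer wg.Done()
            fmt.Println("task", n)
        }(i)
    }
    wg.Wait()
}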
I might be wrong, but this probably has something to do with automatic kernel module loading. Although I have no idea why they had that enabled on a production server. Perhaps the OS had it enabled by default and they didn’t realize.
I did look at both of them, and neither of them were pure Go. They also didn’t serialise; I could sync to a file but without encryption, which defeats the whole purpose ;) There are certainly a number of areas where the DB structure could be improved, but I haven’t hit the use case yet where I need to.
On Linux, I use upstart and on OpenBSD, tmux. One of the things I don’t like about Go is how much of a pain point this has been, especially being used to daemon(3).
On Linux, I use upstart and on OpenBSD, tmux. One of the things I don’t like about Go is how much of a pain point this has been, especially being used to daemon(3).
I think OpenBSD has daemontools and runit. I would think either would be a better option than running in tmux/screen.
As far as it being a pain point, as a counter anecdote, I prefer running my services under daemontools/runit/upstart/systemd/launchd, as opposed to them daemonizing on their own.
It does, but most of things I’ve run on my OpenBSD server were just experiments and prototypes where it was more useful to have it running in the foreground when I wanted to do something. If it was running in production, I’d use runit.
Criticizing things is indeed an important thing to do, but you haven’t succeeded in doing it so far.
Go supports a different form of compile time safe polymorphism via structural sub-typing. No lying necessary.
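A quick sketch of what that looks like (hypothetical types, not from the thread): any type with the right method set satisfies an interface automatically, and the compiler checks the call site statically.

    package main

    import "fmt"

    // Sizer is satisfied by any type that has a Size() int method;
    // no "implements" declaration is needed (structural subtyping).
    type Sizer interface {
        Size() int
    }

    type File struct{ bytes int }

    func (f File) Size() int { return f.bytes }

    // Describe works on anything that structurally satisfies Sizer,
    // and the check happens at compile time - no casts.
    func Describe(s Sizer) string {
        return fmt.Sprintf("%d bytes", s.Size())
    }

    func main() {
        fmt.Println(Describe(File{bytes: 42}))
    }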
I agree that Go supports a single, limited form of polymorphism. Very quickly you come to a point where you have to lie.
In what situations? When you are writing your own generic containers you run into a problem, but otherwise when do you actually need more than an interface? The built in containers are quite capable for most applications. If you truly need a specific data structure, how often are you going to need to use it on any possible type? A database may use a skip list for good concurrency working with tuples. A text editor may use a rope for working with strings. I would argue that using those specialized data structures generically in every situation is marginally useful.
The Go developers argue that generics are not adequately useful to be added, and I haven’t seen a compelling example that indicates otherwise, do you have one?
I want a Tree data structure, maybe I’ll only use it on 3 different types of values in my application but why:
The 3rd point might not seem like a problem but it has huge implications, documented in a concept called parametricity.
It’s not just data structures. Functions shouldn’t have to be unnecessarily specialised. If I need to write a function like so:
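A minimal stand-in for that kind of function (hypothetical names; assume something like a head-of-slice helper):

    package main

    import "fmt"

    // FirstInt returns the first element of a slice of ints.
    // Nothing in the body depends on int, yet the signature pins it down.
    func FirstInt(xs []int) int {
        return xs[0]
    }

    func main() {
        fmt.Println(FirstInt([]int{3, 1, 4}))
    }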
Why should we care about whether it’s an int or a string or a gobbledok? Our alternative is to use an interface:
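And a sketch of the interface-based version (again hypothetical), with the assertion the caller now has to make:

    package main

    import "fmt"

    // First returns the first element, but the element type is erased to
    // interface{}, so callers learn nothing about the result's type.
    func First(xs []interface{}) interface{} {
        return xs[0]
    }

    func main() {
        values := []interface{}{3, 1, 4}
        n := First(values).(int) // runtime assertion; panics if the guess is wrong
        fmt.Println(n + 1)
    }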
But now we know nothing about the output type when we go to use it. We’d have to cast - will that work? Only after knowing exactly the definition of the function do we know for sure.
I hope I don’t have high standards by refusing to use a system which doesn’t allow using types as documentation or which asks for information it doesn’t need nor use.
I am familiar with all of the concepts you described. However, Go emphasizes simplicity. I haven’t ever heard someone claim that they encountered a serious problem with lack of total parametric polymorphism that so critically impeded their ability to write their application that they would trade away the simplicity of Go. Go was designed for a specific purpose, to write concurrent applications that are easy to reason about, both in terms of their concurrent aspects and performance as a whole.
This is really more of a philosophical point than a practical one. While such a feature is nifty, it’s not that useful. We can talk all day about how neat monads are (really neat), or how map is so much more elegant than a for loop, but at the end of the day when you are writing a network server chewing through tens of thousands of requests per second, reasoning about the exact performance characteristics of a for loop is easier.
Why? Why not use a map? There are plenty of ways to use trees besides maps, ways that involve custom, non-generic logic to create and maintain the tree to reap some performance benefit or other desirable characteristic.
You have different priorities. You aren’t wrong for your priorities, and your priorities don’t make all other priority sets wrong. You seem to be claiming that the lack of generics is so crippling that Go is trash not worth using, by anybody. That’s how I read your tone anyway. But that flies in the face of reality: plenty of people are accomplishing significant feats with Go, without generics. I originally asked what language features—or possibly characteristics of the problems Go is solving—accommodate the lack of generics. Why do the developers feel they aren’t useful enough to add? Defending Go has made me come up with concrete reasons, and I have answered my question for myself.
I don’t think so. Go is about making the simple decision, not the ultra nifty yet often unnecessarily complex one. Make the simple choice, and you don’t have to lie. That puts Go at the opposite end of the spectrum from languages like Haskell, a trait that a lot of people consider a feature.
Also, I should note that around 4 years ago I too thought Go was a sensible idea and I was using it for things but I noticed the problems and started using more capable tools.
No, code duplication is a very practical issue - it’s ridiculous to say otherwise.
Parametric polymorphism is not about for loops.
Do you see the irony here? Making a false compromise in the name of performance but then sacrificing it by using a Map?
Yes, I refuse to use a language which doesn’t allow code reuse and I’ll point out the silly false sacrifice made for “simplicity” reasons. I think more should have the same standards.
I have no tone.
People are accomplishing things despite the lack of generics, by not abstracting and not reusing code.
I have seen the answer, too. They put up with code duplication and don’t allow abstraction.
Parametric polymorphism as a type system feature has been around for about 30 years now. It’s extremely simple to describe and implement.
Go’s “simplicity” (i.e. lack of parametric polymorphism) is a silly thing to trade for. Your application becomes more complicated because of no abstraction and no code reuse so that the language spec doesn’t spend a few paragraphs defining parametric types!
I just searched Twitter for #golang to see what other strange things were being said and I immediately saw this monster:
https://clipperhouse.github.io/gen/
The solution for lack of generics? A code generation tool!
https://github.com/clipperhouse/gen/blob/master/templates/projection/projection.go
This is simplicity?
https://github.com/joeshaw/gengen
C++ templates are just fancy code generation. Well, fancy is a stretch. Built in though.
Having a type system that handles generics is a better solution for generics, certainly. But code generation isn’t horrific in and of itself. It’s one of many tools in the programmer’s toolbox, and is fairly easy to work with. Protobufs use code generation to make parsers and interfaces, and that’s fine in my opinion.
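For what it’s worth, Go has first-class hooks for that workflow via go generate. A minimal sketch, assuming the stringer tool from golang.org/x/tools is installed (hypothetical package and type names):

    // Package weekday is a hypothetical example: running `go generate`
    // invokes stringer, which writes weekday_string.go next to this file
    // and gives Weekday a String() method.
    package weekday

    //go:generate stringer -type=Weekday

    type Weekday int

    const (
        Sunday Weekday = iota
        Monday
        Tuesday
    )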
I read: generics aren’t necessary, code generation rather than generics is fine.
Very strange ideas!
For certain kinds of applications, sure. Because for certain applications generics aren’t that important.
I can’t think of any nontrivial applications for which generics wouldn’t be useful. Besides code reuse, parametric polymorphism gives you better reasoning about what your code is doing via parametricity/free theorems.
Now, I’m not saying that generics won’t be somewhat useful for a lot of these. Of course they would be, in some parts of the code. But most of those use cases are covered by Go’s type system. The number of times you need to be able to specialize some code to literally any type ever is not that high. An interface will do just fine.
Seriously?
Then why are C++ templates such a mess? And why is it that Java generics are implemented with type erasure, don’t work with primitives, and aren’t safe with arrays?
You are right though, it’s even said on the Go site that some code duplication is the preferred solution to certain things. Many people find that a small amount of duplicate code is acceptable in certain circumstances, otherwise the industry standard would be Haskell. For an application that necessarily relies heavily on generic types, I wouldn’t pick Go.
Exactly zero abstraction is ideal. Main is the only function anyone needs.
I truly hope and believe this is not a position shared by many!
A big part of the reason why generics are such a mess is because of the intersection of subtyping and parametric polymorphism. IMO, get rid of subtyping and use row polymorphism for records.
The point is, generics are actually pretty hard and complicated, even for people who understand them.
CS 101 students around the world are pioneering this strategy!
Your examples only show that retrofitting or piggy-backing on features to get parametric polymorphism has always resulted in sadness. Implementing generics as a goal is extremely simple - I’ve done it many times!
Also, from what I’ve heard from developers on the Rust team, making everything work just right is actually extremely challenging for them. The common complaint I hear is that the type system touches everything, so working with it is difficult.
Link to repo?
http://brianmckenna.org/blog/type_annotation_cofree
Nice, well written. That said, everyone in my college Programming Languages class wrote a type annotator for a similarly trivial “language” in about an hour during one lab session. A real programming language is quite a bit harder.
He listed earlier many “real programming languages” with parametric polymorphism. It’s not exactly a colony on Mars. This is something we’ve known how to do for literally decades. And saying that languages shouldn’t have parametric polymorphism because you personally think it’s difficult to implement is like saying cars shouldn’t have transmissions or 4-stroke engines for the same reason.
Not being able to write generic data structures in Go without casts is awful and basically inexcusable for a modern language that wants to be taken seriously.
Which one has a perfect system? The Go developers won’t introduce one because they can’t see a clean way to do it that jives with the rest of the language.
Because it’s difficult to get right, for a lot of people, including the very capable people on the Go team. Side note, Tesla is doing well.
I mean, it kinda sucks. I’ve never been that bothered. A lot of companies are writing critical infrastructure in Go, and the word critical implies they are taking it seriously. Writing correct, fast, and maintainable Go code is easy, and a lot of people like that.
There are no casts in Go. You can convert primitive types (such as string to []byte or int to float64), but when working with interface{}, you can’t convert, you can only do type assertions. Type assertions will fail if the asserted type doesn’t match, so you still have type safety at runtime. You still can’t hammer a square piece through a round hole.
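A small sketch of the distinction (hypothetical values):

    package main

    import "fmt"

    func main() {
        // Conversions between compatible concrete types are checked at compile time.
        b := []byte("hello")
        f := float64(42)
        fmt.Println(b, f)

        // A type assertion on an interface value is checked at runtime.
        var x interface{} = "hello"
        s := x.(string)  // succeeds
        n, ok := x.(int) // comma-ok form: ok is false, n is zero, no panic
        fmt.Println(s, n, ok)

        // x.(int) without the comma-ok form would panic here rather than
        // silently reinterpreting memory the way a C cast can.
    }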
There is no such thing as runtime type safety. Go’s “type assertions” are in every way a type cast.
From the specification:
(Emphasis mine.)
So I’m afraid you are mistaken: type assertions are not at all like a type cast – at least not like a type cast in a weakly typed language like C, wherein they instruct the compiler to treat x as being of type T regardless of what its actual type at runtime is.
Now type assertions in Go do allow you to compile code that is not always correct, including code that is never correct. Nevertheless, they do not allow you to run code that is not correct, and thus are safe, much unlike type casts in a language like C.
Just because you can write this in Java:
And it won’t crash, doesn’t mean you’re not casting.
I was responding to your claim that a) there is no such thing as runtime type safety and b) type assertions in Go are therefore not safe.
Now in what way is your Java code example unsafe? Will it ever run incorrect code? Not as far as I can tell.
It is clear that Java’s casting does allow you to compile incorrect code, like Go’s type assertions do. I’ve already said so. (Though arguably the code in your example is not even ever incorrect.)
So if you were trying to make a point about my reply, I can’t see what that is.
Yes there is. Compare the result of running python -c '"1" + 1' and php -r '"1" + 1;'. The difference is a result of Python having more type safety than PHP, which is checked at runtime.
The difference has nothing to do with types.
This is a form of safety, but it’s not static type safety. From Types and Programming Languages:
So Go will let you know at runtime if you’re casting between two incompatible types, and this is a form of safety, but not static type safety. Similarly, Python won’t let you index past the end of a list, and this is a form of safety, but, once again, not type safety.
Pierce continues:
In the end, this is mainly an argument about definitions, but it’s important to get these definitions right when discussing language design tradeoffs, especially when these definitions clue us in to real differences between languages.
For what it’s worth, I think Go’s lack of parametric polymorphism is absolutely crippling.
[1] Pierce, Benjamin C. (2002-02-01). Types and Programming Languages (Page 28). MIT Press. Kindle Edition.
[2] Ibid.
I don’t understand why you’re saying this. I’m not talking about static type safety. I’m talking about type safety at runtime. Puffnfresh has claimed that no such thing exists. I’ve provided a counter-example.
This is basically a conclusion drawn from the distinction made between strongly and weakly typed languages. It’s not like I’m pulling this out of thin air.
For what it’s worth, I disagree.
Both PHP and Python only have a single type, so it’s kind of nonsensical to claim that one has more type safety than the other, at least using a TaPL definition of the word “type”. What they do have is different runtime semantics for handling values with different runtime tags, which these languages colloquially (and unfortunately) refer to as “types”.
I cannot see any point you’re making other than to indulge in a definition war. If we were at ICFP and I started talking about the type safety of Haskell and Python as if they were the same thing, then I’d applaud your correction. But in a wider audience, it’s quite clear that comparing the type safety of languages like Python and PHP is a perfectly valid and natural thing to do.
It’s not a word game; my point is that I question whether this alternative definition of the term actually gives rise to a well-defined comparison or ordering at all. What could it possibly mean for a programming language to “go wrong less” than another programming language if they both admit an infinite number of invalid programs? Hypothetically, if I have a language Blub, can I always form a language Blub' that is less type safe than Blub, and what would I have to alter to make it so? This kind of redefinition just leads to nonsense.
My example up-thread contrasting Python and PHP is absolutely not nonsense. The distinction between them is a result of a difference in each language’s type safety.
I mean, hell, if you try running the Python code, you’ll get an exception raised aptly named TypeError. What other evidence could possibly convince you?
It is a nonsensical notion if you take it to its logical conclusion, that there exists this alternative notion of “type safety” in terms of their runtime semantics that we can compare languages based on. Sure, you can contrast the addition function for a fixed set of arguments, but how do you generalize that to then make a universal claim like “Python has more type safety than PHP”?
If it’s a well-defined concept, then suppose I gave you [Python, Fortran, Visual Basic, Coq, PHP] what would be the decision procedure in the comparison “function” used to order these languages for your notion of type-safety?
Why do I have to take it to its logical conclusion? The distinction exists today. Python is strongly typed and PHP is weakly typed. These descriptions are commonly used and describe each language’s type system.
The addition function? Really? That’s what you got out of my example?
Try this in a Python interpreter: "1" + 1
What do you get? A TypeError!
Who says it has to be well defined? I certainly didn’t. I’m merely drawing on the established conventions used to describe properties of programming languages.
I still truthfully don’t understand what your central point is. Are you merely trying to state that there exists some definition of type safety that is nonsensical to ascribe to dynamically typed languages like Python or PHP? Great. I never contested that. But then you really are just playing word games, because it’s patently obvious that that definition isn’t being invoked when discussing the type safety of precisely the languages your definition doesn’t apply to.
There is no such thing as strong or weak typing for the same reason there is no such thing as runtime type safety. They’re not well-defined.
OK. So you’re playing word games. Just as I thought.
Concepts such as strong and weak typing exist and they are used to draw meaningful comparisons between languages all the time. So you covering your ears and simply saying this doesn’t exist is a bit ludicrous.
They can’t and shouldn’t be used to draw conclusions about programming languages at all unless they have a precise meaning, which they don’t. The fact that they are used all the time to make specious arguments doesn’t make them any more accurate or precise, that’s just a consensus fallacy.
Except they are. I’ve given examples to support my claim. You’ve done nothing but appeal to your own definition.
I’ve committed no fallacy because I’ve drawn no conclusions from the fact that there is a consensus. I’ve merely pointed out that there exists a consensus. (Which you absurdly claimed doesn’t exist!)
No really, there isn’t a consensus on these terms. Even the Wikipedia article on “weak and strong typing” prefixes everything it says by noting that the terms have no precise meaning, that many of the proposed definitions are mutually contradictory, and that they should be avoided in favor of more precise terms. Probably this answer is the best explanation of why the terms themselves are completely meaningless; they’re just used in sophistic arguments to justify preconceived bias about language features.
Which is why I claim that defining this new term “runtime type safety” in terms of these other ill-defined terms is fallacious.
It boggles my mind that you think I’m trying to precisely define anything. I’m not. I’ve merely pointed to the facts: the terms weak typing and strong typing have meaning, and are used to compare and contrast language features in a way that effectively communicates key differences. These differences relate to the way types are handled at runtime.
I never once said that there weren’t any problems with these terms or that they weren’t vague.
Welcome to the subtleties of human language. You’re arguing about what should be. I’m pointing out what is.
If only Hume were here, he’d be pointing and going “This!”
There are two kinds of type safety: Static types and strong types.
What you are describing is strong types; safety at run-time. Static types go beyond that; they give safety at compile-time.
Now, in Java, generics are implemented with a static type guarantee. But there is a catch: the type parameters are erased after compilation. Also, Java doesn’t support covariant type parameters, which means you end up type-casting / type-asserting sometimes, though not very often.
However, if a language doesn’t support generics at all, then you have to use type-casts / type-assertions every time you need to reuse code!
And that language isn’t Go.
I swear, this whole thing is messed up. Gophers tend to overstate the power of structural subtyping and people who haven’t written a lick of Go seem to dismiss it entirely. Believe it or not, Go actually does have a mechanism for compile time safe polymorphism. And yes, that means static types!
The quoted sentence of mine was deliberately brief. There are many ways for a language to facilitate code reuse. Structural sub-typing allows code reuse in a different way than generics. You can’t create type-safe and re-usable collections with structural sub-typing for example.
I never implied otherwise. I made the comment I did because there are others in this thread that are repeatedly stating inaccuracies about Go. Namely, that it has no mechanisms for code reuse. Given that context, it’s unclear exactly what you were implying.
And that brings me back to my point. People seem to think that just because Go doesn’t have their favorite blend of polymorphism that Go has none of it at all. Or at the very least, completely dismiss structural subtyping simply because it’s different from what you like.
As far as code reuse goes, structural subtyping is only one piece of the puzzle. Go also has type embedding and properly implemented first class functions. (Which sounds like a weird benefit to quote, but not every language gets lexical scoping exactly right. Go does.)
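A small sketch of the embedding point (hypothetical types): methods of the embedded type are promoted, so the outer type satisfies the same interfaces with no forwarding boilerplate.

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    // Logger embeds a bytes.Buffer. Buffer's Write and String methods are
    // promoted, so *Logger automatically satisfies io.Writer.
    type Logger struct {
        bytes.Buffer
        prefix string
    }

    func main() {
        l := &Logger{prefix: "app"}
        var w io.Writer = l // compiles because of the promoted Write method
        fmt.Fprintln(w, "hello")
        fmt.Print(l.String()) // promoted String method reads the buffer back
    }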
This is very misleading. Judging from the long exchange in the Gist, this appears to me to be blown out of proportion. While I understand this guy’s point that it’s a “security” issue, it seems to be a feature that was implemented (as noted by DigitalOcean) to combat human error - oops, I deleted that droplet, I need it back.
I’ve created a few test droplets and removed them, I never really paid any attention to the fact that I can still recover them up until a certain time. This just sounds like someone unhappy with the feature. If that level of security and assurance is needed, I don’t think a VPS provider is where you should be looking to store your data in the first place.
Having worked for a large German web hosting company (providing virtual servers, shared hosting including email, MySQL databases, the usual stuff), I can tell you that whenever you have a large amount of users, you will always have quite a few users that accidentally delete some of their stuff, and want it back. And, of course, the users paying the least for the service scream the loudest (which shows in negative comments on web forums and social media and definitely has an effect on the company’s reputation), so you always implement some mechanism to make it possible to recover data that was “deleted”. And you usually do so by locking or disabling said data (making it inaccessible to the user), marking when it was disabled, and then delete after a certain period, e.g. after a month. Usually, most users will complain within a day or two to get their stuff recovered. Of course that’s not exactly cool for people who expect their data to be securely deleted immediately, but it greatly increases user satisfaction whenever users make mistakes and click the wrong buttons.
What’s the connection to android?
It isn’t immediately obvious.
In the tags.
Because he’s making up new meanings for words that already exist and trying to convince people that his definitions are better than the ones that people have been using for hundreds of years.
While that’s true, maybe the more relevant aspect is that for the first half-century or more of computer programming, there wasn’t a clear distinction. I do think that these new definitions have caught on, though, so we should use them.
Applying the literal definition of concurrency (things actually happening at the same time) to computer programs is something that’s only made sense recently, because it used to be literally impossible for a CPU to do more than one thing at a time. That doesn’t mean the literal definition of concurrency is some new, questionable, or unclear thing.
Well, in the early 50s there was already competition between “parallel” and “serial” computers, but the difference was that we were talking about how the bits in a machine word were treated when you were e.g. adding two words together. Parallel computers were much faster but also more expensive. Multicore shared-memory computers date from at least the 1960s: the CDC 6600 (1965) had a “peripheral processing unit” that time-sliced among 10 threads, context-switching every cycle, each with its own separate memory, in addition to having access to the central processing unit’s memory. And of course in some sense any computer with a DMA peripheral (which date from the 1960s if not the late 1950s) is in some sense doing more than one thing at a time. Symmetric multiprocessing dates from, I think, the late 1960s. Meanwhile, what we now know as “concurrency” was already in play—even before the CDC 6600’s PPUs, “time sharing” (what we now know as having multiple processes) was an active research topic.
So parallelism and concurrency have been around in more or less their current form for half a century. Nevertheless, I think the distinction Pike is drawing between the two terms' meanings is much more recent, like the last decade or two; but it does seem to have become accepted.
A current example with a little discussion of this usage, C. Scott Ananian talking about Rust:
Concurrency has been around since long before multiple cores existed. User input, file input, displaying output, punching cards, etc.
It’s not a recent concept.
You might want to remove your downvote; review my longer comment above.
I stand corrected.
You, sir, are a gentleman.
I don’t know that it’s due to confusion. There are a lot of self-taught people who might be running into this the first time. There will always be the next generation of programmers coming up who just haven’t learned this yet. We’ve all got to start somewhere and learn this at some point, and not everyone’s been programming since they were 15. I do like seeing it crop up, it’s a good video to learn from.
I actually had to keep rereading about parallelism vs concurrency. I got it when I read it, but after some time I found myself trying to explain it and failing to, and my thing is if I can’t explain it easily then I don’t understand it well enough, so I go back and read it again. When I want to implement concurrency, I still of course have to look up good ways to do that.
I don’t think there’s really much confusion. It just helps a lot to define terminology at some point, and go on from there. And that’s what I think Rob Pike does very well in this presentation, to show the abstract programming model of concurrency, and how an implementation (in this case, Go) can achieve parallelism by leveraging fundamental properties of the abstract model.
wat
Linux seems to have a few “touch the network stack and magic will happen” edges. This automatism of kernel module loading vaguely reminded me of this: http://backstage.soundcloud.com/2012/08/shoot-yourself-in-the-foot-with-iptables-and-kmod-auto-loading/
I might be wrong, but this probably has something to do with automatic kernel module loading. Although I have no idea why they had that enabled on a production server. Perhaps the OS had it enabled by default and they didn’t realize.
This might refer to pycurl’s Curl objects (assuming that bitly uses python as the rest of their blog seems to suggest content-wise).
I did look at both of them, and neither of them were pure Go. They also didn’t serialise; I could sync to a file but without encryption, which defeats the whole purpose ;) There are certainly a number of areas where the DB structure could be improved, but I haven’t hit the use case yet where I need to.
On Linux, I use upstart and on OpenBSD, tmux. One of the things I don’t like about Go is how much of a pain point this has been, especially being used to daemon(3).
I’m glad it was interesting.
BTW, there is a pure Go implementation of LevelDB available: https://github.com/syndtr/goleveldb
I think OpenBSD has daemontools and runit. I would think either would be a better option than running in tmux/screen.
As far as it being a pain point, as a counter-anecdote, I prefer running my services under daemontools/runit/upstart/systemd/launchd, as opposed to having them daemonize on their own.
It does, but most of the things I’ve run on my OpenBSD server were just experiments and prototypes, where it was more useful to have them running in the foreground when I wanted to poke at something. If it were running in production, I’d use runit.