I use Go, and have similar feelings to the OP’s friend.
There is a section in there about simplicity – how people tend to like simplicity in languages. I feel like that’s correct but incomplete. People like to start with simple languages, I think because it means they can wrap their heads around the whole thing; but once people start having not-simple problems, and the language becomes a pain point, they want to add new features to make the language more suitable. This is a natural thing to do: since the new feature seems ‘simple’ and the language is already known, it’s easy to add the new feature and absorb it into the language. Give this a few years, though, and you start to have languages like Java – which started out as a ‘simpler C++’ and ended up being a very wide and complex language. Similarly with C and C++, or Perl and Ruby, and so on.
Point being, ‘simple’ is a luxury of the early adopter: problems tend not to remain within the scope of a simple language for long, and over time the complexity of any language will grow (in terms of not only core language features, but libraries and ecosystem). That’s not bad, but I think it’s inaccurate to say Go is ‘simple’ and D is not; it’s better to say Go is new, and D is less new. It’s therefore natural that Go would have a smaller feature set than D – it’s simply had less time to accrue its feature set.
It doesn’t end there either – reframing in that way, the right question to ask with respect to complexity is more obvious: it’s not about present simplicity or complexity, but about the general direction of features and the future of the language. Go, for instance, is tracking in a direction which emphasizes its suitability as a language for basic systems programming for everyone – for lack of a better term, it feels very populist. It’s not going to do anything astoundingly new, it’s just going to package good ideas in a way that is accessible. D, on the other hand, seems to be interested in providing a Swiss Army knife of tools and techniques. It feels a lot more like C++ in this regard. If you have a problem, liberal application of D will solve it, in six ways, and leave you with a corkscrew and that little fucking screwdriver that always gets lost. They both have appropriate domains of use, and both should be considered on those merits, but I think it’s inaccurate to choose Go simply because it’s ‘simpler’ – it won’t remain that way, and it also ignores that you don’t have to have complete understanding of a tool in order for it to be useful.
Go is ‘simple’ and D is not; it’s better to say Go is new, and D is less new
Hmm. Not sure that is true – I’m pretty sure I have heard/seen statements by the Go devs that they explicitly want to keep the core language simple and put any complexity needed in libraries.
I get the impression the D devs have started with the C/C++ attitude “The language definition shall permit such efficiency that there must be no room for another language underneath it.”
Alas, silently C/C++ added, “We will never burn a CPU cycle to save a programmer from his stupidity” and “We will never make a decision that will make some large corporate compiler or CPU vendor unhappy with us.”
D, bless ’em, have said, “There are certain common stupid things all programmers do and have done and will do again… let’s design the language to catch as many of those as possible at compile time, or even at runtime, even if it does mean burning some CPU cycles.”
D, bless ’em, have said, “All those stupid things where the C/C++ committee couldn’t make up its mind because it would offend some vendor… let’s make a reasonable decision.”
D, having taken on the C/C++ mantle, hasn’t stopped there; they seem to have said, “Where a dynamic or functional language has a clearly better idea, a better paradigm than C/C++, and there is no reason why we can’t cleanly adapt it, why not?”
D, has taken a hard look at the many sharp and painful corners of C++, and slapped them right.
With D, like Ruby, you can metaprogram and monkey patch as much as you like…..
…but 99.9 times out of 100, you don’t.
Something in the standard library has done the scary stuff and wrapped it in an easy-to-use, easy-to-understand (and, in D, compile-time typechecked) interface.
Did you know that Ruby programmers use metaprogramming and introspection and monkey patching every day….?
Most are blissfully and uncaringly unaware that they are.
The stdlib just gives them the wonderful happy feeling that “Ruby is easy to use”.
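To make that concrete – a minimal Ruby sketch (the class names here are made up for illustration) of everyday stdlib features that are metaprogramming under the hood:

```ruby
# Struct.new generates a brand-new class at runtime, complete with
# an initializer and accessor methods for each member.
Point = Struct.new(:x, :y)
pt = Point.new(3, 4)

# attr_accessor defines reader and writer methods on the fly;
# neither #radius nor #radius= is written out anywhere.
class Circle
  attr_accessor :radius
end

c = Circle.new
c.radius = 2

puts pt.x                        # => 3
puts c.radius                    # => 2
puts c.respond_to?(:radius=)     # => true (the method was generated)
```

None of this feels like metaprogramming to the person writing it – it just feels like “Ruby is easy” – which is the point being made above.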
Ruby notoriously burns a huge amount of runtime cycles to give this feeling.
D has aimed at a trade off point of burning compile time cycles rather than runtime, and permitted maximum ease for minimum runtime cost.
This means a developer’s view of D is slightly more complex than Ruby’s. But I suspect it is more a matter of relaxing a little and trusting the library.
Panic! What type does Regex’s “match” return? I can’t see it – it’s a Voldemort type! Shhh, relax: you don’t actually need to see it. You will never declare one. “auto” is all you need.
I’m not sure I’m convinced, but I think your argument isn’t bad, with one caveat. Speaking as a Ruby programmer, it’s not my experience that Ruby programmers use ‘metaprogramming’ every day – depending on your definition. If you mean ‘consume code which uses metaprogramming’, sure; if you mean ‘write code that metaprograms’, I’d venture a Ruby programmer does it about as often as a Python programmer – it’s not nearly as popular as people make it out to be. As far as monkey patching goes, I’ve used it precisely once in my career, and that was to patch out a bug which I subsequently submitted to the author of the library as an actual patch, so that I could remove the monkey patch. That is (to my knowledge) the only really useful application of monkey patching in the wild.
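The pattern described here – monkey patching a library bug locally while submitting the real fix upstream – looks roughly like this in Ruby (the gem, class, and bug are all invented for illustration):

```ruby
# Pretend this is the upstream library as shipped: it strips
# whitespace but (buggily) also downcases the input.
module SomeGem
  class Parser
    def clean(s)
      s.strip.downcase
    end
  end
end

# The monkey patch: reopen the class and replace just the broken
# method. This lives in our codebase only until the patch we sent
# upstream is released, at which point this file gets deleted.
module SomeGem
  class Parser
    def clean(s)
      s.strip   # fixed: no more unwanted downcasing
    end
  end
end

puts SomeGem::Parser.new.clean("  Hello  ")   # => "Hello"
```

The appeal is that no fork or vendored copy of the library is needed – and the danger is exactly the same: the override silently shadows upstream behavior, which is why removing it once the real fix lands matters.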
Based on your statement that ‘Most are blissfully and uncaringly unaware that they are’, I assume you mean the former – metaprogramming ‘consumption’. To that I say, I think it’s inaccurate: most seasoned Ruby programmers know that metaprogramming happens. We’re just not scared of it – it’s part of the language and we understand how to use it as a tool to help us write code.
I will say that I think your assessment of the various design philosophies seems accurate – but don’t forget that Java started out as a ‘simpler version of C++’, just like Go is a simpler version of the languages that have come before it. It will remain simple till it accrues enough features that it is not simple.
I’m not trying to pass judgement on the language choice, either – I think language choice is essentially entirely personal. The vast majority of languages are basically equivalent in terms of ability, and it’s a function of your personality, and your team’s personality, to determine which one is best. My point was that ‘simplicity’ for Go is as much a function of age as it is design. As it has been in the past, and will be in the future.
I mean “uses standard library code that can only perform its documented functionality by doing metaprogramming internally” – but does so so simply and painlessly that you have to stare hard at the library source to see where it is doing it.
My point was that ‘simplicity’ for Go is as much a function of age as it is design.
Perhaps you are right, I thought it was the stated intent that it wasn’t so, but I can’t find the link right now. So since I can’t find the link, I concede.
This is probably the link you were looking for:
Thanks, that is indeed one of them. I think he also stated that sentiment in a video as well.
I’m a bit bothered by how you speak of the C/C++ committee or the C/C++ paradigm or the C/C++ mantle.
C and C++ are different languages. Radically different. A lot of C I encounter in the wild doesn’t even compile with a C++ compiler. Sure, a lot of C does compile under C++, but that’s really an old marketing trick that doesn’t apply anymore, especially since C++11.
Stroustrup has publicly stated he would prefer C++ to remain compatible with C.
Certainly, in the “marketplace of all available languages”, and even in the internals of several compiler implementations, C and C++ have more in common than they have differences.
Two things come to my mind. I don’t agree or disagree with what you wrote. Just thinking out loud.
Firstly, the principle of personal mastery as outlined by Design Principles Behind Smalltalk resonates with me:
If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual.
If it’s true that languages get more complex over time, then it won’t be possible to follow this principle in the long term. We may try to look for old languages that are still simple. Can we count C as simple? What about Fortran (I don’t know whether/how it’s currently used)? Smalltalk? ML?
Secondly, the language itself can provide enough extensibility to solve new pain points. Lisp macros come to my mind. I have also heard that Tcl is extensible in a similar manner, but I have absolutely zero experience with it.
That is an interesting idea, the notion of ‘entire comprehensib[ility]’; but I think I have a different interpretation of what that means. For something to be comprehensible, it merely has to have the ability to be understood – you don’t actually have to understand it. For instance, I would claim that Mathematics is a system which serves (at the very least my) ‘creative spirit’ – and yet I would happily say I don’t understand the entire body of mathematics. However, I think it is reasonable to say that I understand some of mathematics, and the rest is comprehensible to me – that is, I have the ability to, but not the actuality of, understanding it.
The inevitability I propose in terms of rising complexity in languages isn’t about them becoming so complex so as to be unusable. Indeed, this inevitability means that we have to strive to make languages usable despite rising complexity. I’m interested to see how Go handles this, as many languages thus far have failed to remain usable to new programmers over time. Essentially my argument is this – the life of a language is described by the following equations:
L(0) = k
L'(t) = newcomers(t) - outgoing(t)
Where newcomers(t) is a function which takes language complexity and language usefulness and produces the number of people newly interested in using the language, and outgoing(t) takes the complexity and usefulness of all other languages versus the complexity and usefulness of this language and gives you the number of people leaving the language. L(t) is thus the number of people using the language at any point in time. A ‘successful’ language might be defined as one where the integral from 0 to infinity of L(t) is large, and a language might be ‘more successful’ if its integral is larger than another’s. newcomers and outgoing are relatively opaque functions – we don’t have a strong analytical model for either – but we can guess at their structure, and my suspicion is that a language whose integral from 0 to infinity is large will require some method to ameliorate the inevitable rising complexity of the language.
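Spelling that model out with the same symbols, so the connection between L'(t) and the ‘success’ integral is explicit:

```latex
% Initial user base and growth rate, as defined above:
L(0) = k, \qquad L'(t) = \mathrm{newcomers}(t) - \mathrm{outgoing}(t)

% Integrating the rate gives the user count at time t:
L(t) = k + \int_0^t \bigl(\mathrm{newcomers}(s) - \mathrm{outgoing}(s)\bigr)\,ds

% and "success" is the lifetime user-time accumulated:
S = \int_0^\infty L(t)\,dt
```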
Reifying this bit of abstract nonsense – look at Lisp, Smalltalk, and C. All three languages are roughly equal in age, but for some reason C has retained the most users for the longest time (its integral is bigger than the others’). I think you’re on the money with your curiosity about the other languages, but we don’t (and can’t) only consider their relative simplicity. Their popularity might also be due to better retention (potentially against the will of the user; i.e., vendor lock-in), or having such a rich feature set / wide scope of use that they outstrip the disadvantage of their relative complexity. I think that’s probably more of why C – despite being a relatively complex language independent of the others – has a powerful enough feature set to help it remain attractive to newcomers. Complicating these calculations is that complexity isn’t an absolute measurement – C is simple in comparison with C++ or Lisp-with-CLOS, but wildly more complex than, say, Python. Python, on the other hand, has limitations that C doesn’t; yet some of those limitations (garbage collection means less control over memory usage, for instance) are also simplifying advantages (no manual memory management). So the newcomers and outgoing functions become increasingly complex, as they need to have a very wide field of view in order to account for all the right things that dictate who will use what language when. It’s an interesting area to try to model in a meaningful way, because the underlying drivers are clearly very rich, but also incredibly opaque to analysis.
As far as extensibility, I think it’s actually a moot point – any sufficiently useful macro becomes a de facto part of the language, at least for the person using it. It’s nice from one perspective, because that person can now deal with some of the complexity piecemeal; but a newcomer still needs to learn the commonly used macros and how to write them, meaning it becomes somewhat more difficult to adopt the language because of the ‘big feature’ of macros (which are pretty complex) and the stream of ‘small features’ of commonly used macros they either must write or import. Ultimately, I think it’s a bit of a wash in terms of total complexity, but it may be worth noting that languages with Lisp-like macros are not super popular, so perhaps it’s a detriment?
In any case, I think the crux of this argument is that complexity in a language is a function of several components – not only language complexity, but environmental complexity in the form of libraries, best practices, and so on. Even if something like Go can keep its core language simple, things like best practices and the library-space will – invariably – evolve (and may even evolve faster, because the language itself is unmoving and people want to resolve problems that the language doesn’t provide solutions for directly), so complexity will rise. I think a language like D is taking a different tack: rather than trying to minimize long-term complexity, they’re trying to maximize the feature set – occasionally at the expense of complexity. This makes the language more appealing, even if it’s more complex to learn.

I think the best strategy for long-term usership, though, is the ability to compartmentalize. If I can approach a language like I approach the whole of Mathematics – field by field, feature by feature – then my perceived complexity is much lower than the whole offering of the language. Once I’ve learned some part of the language, learning other parts appears less complicated by virtue of my being able to ‘translate’ some of my knowledge across. In math, for instance, if I start by learning about some component of Abstract Algebra – say Groups – then I can easily translate that knowledge into Rings (which are just Groups with an additional operation and some additional rules), and vice versa. In programming, it seems to me that many languages expect you to treat them as a whole unit – you learn Go, not the various little bits of Go useful for one kind of task that remain isolated from these other bits of Go useful for another kind of task. It may be that there are only a few delineations, but you can (I think) drastically reduce complexity by partitioning your language like this.
Languages with metaprogramming tend to have a distinction like this: you learn how to program, then how to metaprogram. I don’t know of many languages that successfully partition beyond some very basic level like this, but I suspect that if someone could design one, they’d have a real hit on their hands in terms of learnability.
Multiparadigm (aka complex) languages are almost always misunderstood and caricatured by the Internet because they require skill, experience, and taste to use properly. (I’m critiquing the response to these languages, not defending bad design decisions.)
For as much as developers like toys, I suspect they don’t like languages that have features they can’t appreciate, or, worse, make them feel dumb for not knowing. It’s like they stumble on a language feature they don’t use/know, and declare the language bloated or complex. They fail to realize that taste is very necessary to use this language well, and they simply haven’t developed it yet. It’s far more prudent to move on and just not use said feature.
Rapid adoption of languages tends to correlate with a sort of ineffable “this gives me super powers” feeling. For example: I remember when node.js was announced; I poked at it a bit and found it was V8 + an implementation of the reactor pattern. But plenty of people knew nothing about the reactor pattern, and JS was red hot at the time. By marrying the two, people could get the benefits of the reactor pattern without having to implement it, and use a language they already knew.
At the same time, languages don’t seem to get much attention when they fix fundamental blind spots such as overuse of null. Early adopters routinely see themselves as people who don’t make ‘those’ kinds of mistakes.
The most harmful part of all of this is an increasingly short-sighted mentality of the nature of tools. The test drive is everything. We don’t want to be challenged, we don’t want depth, we want the language to be tailored exactly for what we’re doing this instant. And we don’t want it to change any more, except for adding that one feature.
Programmers lie…just like everyone else. The worst part is we often ship them what they say they want rather than digging into what they actually need.
Up until now, the lack of generics has been the only thing preventing me from using Go.
When I started having a look at Go, I thought I would miss generics. After having programmed in Go for a while, I don’t miss them so much in practice.
This is an interesting comment about implementing generics in Go:
for me it was more a choice of ocaml versus d (i picked ocaml, but i am keeping an interested eye on d). i would love to see more head-to-head comparisons between the two; they seem like they’re aiming for roughly the same sort of applications.
Can you talk about your application and what features made you choose OCaml?
it’s a small crossword editor that i’ve been using to explore various combinations of language and toolkit to see which one is nicest to develop and maintain a desktop application in. so far i’ve prototyped it in ruby/shoes (too slow), chicken/iup (not complete enough at the time, though there are now a great set of iup bindings), clojure/swing (got a good long way, but i got sick of both swing and the jvm) and common lisp/eql (eql is nice but inactive, and i’m not a huge fan of common lisp). i’d like to give some more static language a try, and i wanted to use qt if possible.
i’ve not really chosen ocaml insofar as i haven’t started (re)writing the app yet, but the frontrunners this time around were d, ocaml and f#, and i’ve already been using ocaml for a bunch of other stuff. what finally decided me against d was that qtd does not seem very well documented; also, no one seems to be using it. qt/ocaml isn’t the most mature project either, but at least i know there are solid gtk bindings to fall back on, and i already know ocaml (though i’ve not done much gui code in it)
yes, i was clearly biased towards ocaml because i already knew the language, but
i really think a good gui development experience could be a killer application for d, especially if you have the option to end up with a single, static binary (something go did beautifully).