Yeah, I wrote some of the worst code of my entire life in the two years after reading Clean Code. I kept adding more and more functions into my code at different layers, and it made my code into a mess with way too much indirection for relatively simple problems. When reading back through the code months later, I would be hard-pressed to figure out where modifications needed to be made, and had to walk back a lot of my design decisions. Pretty embarrassing.
It reached a breaking point when I went back to a previous project to ship something for a client months later, and every single file I had written had been thrown out because the senior developer on the project was sick of having to deal with it. I felt humiliated and spiteful, and then I grew from the experience a bunch.
Lately I’ve been following the idea from Casey Muratori’s Semantic Compression post a bit, mainly this little tidbit:
Like a good compressor, I don’t reuse anything until I have at least two instances of it occurring. Many programmers don’t understand how important this is, and try to write “reusable” code right off the bat, but that is probably one of the biggest mistakes you can make. My mantra is, “make your code usable before you try to make it reusable”.
It seems obvious, but it’s been helpful for me to avoid abstractions until I have a better sense of where the code is going (and usually even two instances of some code is too small a sample for that).
Making functions when you have obvious inputs and outputs can be nice too, although it can be more difficult when writing graphics code, which tends to be pretty stateful (part of the motivation for Carmack’s post in defense of inlined code).
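To make that concrete, here’s a rough Go sketch of the ordering I mean (the names and numbers are made up purely for illustration): the helper only gets pulled out once a second call site shows up with the same few lines, rather than speculatively up front.

package main

import "fmt"

// Extracted only after the same clamp-and-scale lines showed up twice.
func clampScale(v, min, max, scale float64) float64 {
    if v < min {
        v = min
    }
    if v > max {
        v = max
    }
    return v * scale
}

func main() {
    // First instance: originally written inline and left alone.
    brightness := clampScale(1.3, 0, 1, 255)
    // Second instance: the duplication is what justified the helper.
    volume := clampScale(-0.2, 0, 1, 100)
    fmt.Println(brightness, volume)
}

Nothing fancy, but that sequencing (usable first, reusable second) is what the quote is getting at, at least for me.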
When reading back through the code months later, I would be hard-pressed to figure out where modifications needed to be made, and had to walk back a lot of my design decisions. Pretty embarrassing.
This is such a good point. Code written in that style isn’t even read-only, it’s write-only. All you can ever seem to do to change how any of it works is to delete swathes of code and write more, because all the functionality is implemented in the form of class relationships rather than simple logic.
Like a good compressor, I don’t reuse anything until I have at least two instances of it occurring.
Very much agree with this too.
make your code usable before you try to make it reusable
Thanks for the Semantic Compression recommendation, and this quote in particular. I think I’ll get a lot of mileage out of this concept.
I had a similar experience. Clean code rules are great to get you thinking about this subject, and actually get you to start experimenting with making maintainable code. But your first experiments will always end up very bad, no matter how many books you read on the subject. Only experience with your old code will teach you how to write better code.
Wow, I came to the same conclusion as Semantic Compression just naturally. After a while you realize how wasteful you’ve been code-wise and adopt a better way of thinking.
Super nice video, but I expected he would talk about (or debunk) the legend of writing Crash in some custom Lisp generating highly efficient machine code and allocating sectors on the CD by hand to optimize loading.
Crash also had a lisp: https://all-things-andy-gavin.com/2011/03/12/making-crash-bandicoot-gool-part-9/
That post series is one of my favourite dev diaries/postmortems I have read. I always link it for people interested in old consoles or games.
This is a great example of how programmers dress up arbitrary opinions to make them look like they were rationally derived. The cognitive dissonance that led to writing that C is not supported, but the near-complete superset C++ is supported, or that Rust should not be supported because end developers do not use Rust, must have been nearly overwhelming.
The cognitive dissonance that led to writing that C is not supported
Huh? The doc clearly says that C is approved both for end development and for use in the source tree. It’s not “recommended”, because apparently they’d rather use C++, but that’s not the same as not supporting it.
I read it as, “the biggest problem with Rust is that we didn’t make it, so how good could it possibly be?”
We interviewed a guy the other day who told me that he knows “insider sources” in Google and that they are abandoning Go for all new project starts and moving entirely to Rust.
EDIT: to be clear, I didn’t believe this guy. I was relating the story because it would be funny that they don’t support Rust for Fuchsia but supposedly are abandoning Go in favor of it…
What do you hypothesize his motivation was for such a weird fabrication? Was he proselytizing Rust in an interview? Were you hiring for Rust devs and this just… came out in the interview as a point of conversational commentary?
Just curious. Seems a weird thing for a candidate to raise during an interview!
I think he was trying to sound smart and “in the know.” He brought it up when we mentioned that some of our code is in Go, and mentioned that some of his personal projects were in Rust.
Like I said above, we ended up not extending him an offer, for a lot of reasons.
Hopefully not. I don’t want to deal with the influx of twenty-somethings fresh from university who believe starting at Google instantly added 20 points to their (already enlarged) IQ.
It makes sense to support C++ and not C. C++ makes it easier to write safer code (RAII) and provides a higher abstraction level (templates).
Those are useful features, but that’s the subset of C++ that isn’t “C functions, compiled with the C++ compiler, and avoiding undecorated casts”. If I’m allowed to use C++ then I’m allowed to use C.
My original point remains: “it makes sense” is not “here are our axioms, and here are the conclusions derived from those axioms”. Programming language choice is usually - and rightly - a popularity contest based on what the team finds interesting, and I see no difference here when I look beyond the dressing.
I compare this to an experience I had at FB, where the systems developers chose to question their language choice (mostly C++, some Java). Arguments were put forward in favour of and against Rust, D, Go, and maybe some other things that I’ve forgotten. Then the popularity contest was run in the form of a vote, and C++ won, so they decided to double down on C++. Not because [list of bullet points], but because it won the popular vote.
Idiomatic C++ is qualitatively different from C, though the language is technically a superset. That in itself is enough to have a group disallow C but allow C++.
You are right that it’s an imprecise process. But, popularity is a feature in itself. A popular language has more developers, more help information online, and more libraries. Why wouldn’t popularity be an important feature?
I am so slow at writing design docs at work. I get paralyzed by indecision. I want to become faster!
Deadlines make you faster. Set an artificial deadline for yourself, couple it with a money donation via stickk, and you may be there.
What @adamo said. Talk to people. Talk to a rubber duck if there are no people around (not as good but can work in a pinch). I’ve unblocked myself countless times by talking to my wife. She’s totally non technical but is a keen listener and even keener question asker :)
Also, break the thing you’re stuck on down into finer and finer chunks. What DO you know? File the unknowns down until they’re the tiniest possible size they can be. Sometimes this can really help clarify your situation.
Do people really cheat with their javascript formatting like that? That seems like it would be very misleading to read and I would not encourage it in any language.
Mostly no. At least not now - e.g. it tends to blow up as soon as a source code formatter goes anywhere near it. Alternatives are more widely encouraged in the community.
The async/await thing is the simplest one for flattening code:
async function doTheNeedful() {
  const resultA = await a();
  const resultB = await b(resultA);
  try {
    await c(resultB);
  } catch (error) {
    window.alert("Nooo");
  }
}
The promises API often lets you flatten code to a fixed indentation depth (usually 2 or 3). This isn’t as nice as async/await, because the clearest way to lay out the code isn’t always the layout that’s easiest to express with manual use of promises.
function doTheNeedful2() {
  return a()
    .then(function(resultA) {
      return b(resultA);
    })
    .then(function(resultB) {
      return c(resultB)
        .catch(function(error) {
          window.alert("Nooo");
        });
    });
}
// oh no it crept over to the right a bit. maybe try doing something different?
function doTheNeedful3() {
  function afterA(resultA) { return b(resultA).then(afterB); }
  function afterB(resultB) { return c(resultB).catch(catchC); }
  function catchC(error) { window.alert("Nooo"); }
  return a().then(afterA);
}
// but now the order of statements is all weird and meh - maybe we should've just accepted the extra indentation levels in the earlier version
There exist APIs other than promises, such as the old “async” library you’ll find on npm (https://caolan.github.io/async/v3/), which gives you a library of higher-order functions for composing callback-based asynchronous functions together. I do not personally want to use those.
I just spent the past hour debugging an “error: unreported exception FooException; must be caught or declared to be thrown” message coming from a test.
I thought the exception was being thrown at runtime and was trying to figure out what faulty logic was throwing it.
It’s a compile error.
(this is really cool!)
I love it, this is great! The UI is lovely, and thanks for the keyboard-only workflow :D
It took me a while to see the command bar at the bottom, so I was confused when I first launched the tool. I think a small animation that draws the user’s attention to the command bar when they launch for the first time would be useful.
Thank you so much! Yes, the command bar is a bit confusing; an animation might be a good idea, I’ll try adding it!
These days it seems woefully quaint to have this discussion without factoring in how browsers work, but as far as I know you don’t even have the luxury of being able to refer to a book when it comes to that. (the article is from 2012)
Yeah, one thing I see a lot is toy operating systems. But somehow nobody dares to try a toy browser :)
I looked and found only a couple, probably 2 orders of magnitude less than OSes:
https://limpet.net/mbrubeck/2014/08/08/toy-layout-engine-1.html
https://www.udacity.com/course/programming-languages--cs262
Although I have to update that to say the recent OS project here is really impressive and includes some HTML rendering:
https://lobste.rs/s/2pnnel/serenityos_from_zero_html_year
If anyone knows of any more, let me know!
This is an interesting HTML5 subset that looks like it aims to be production quality:
https://cobalt.googlesource.com/cobalt/+/master/src/README.md
[Disclaimer: I’ve worked on Cobalt code and with the Cobalt team briefly. However I didn’t work with it that much, so I’m mostly clueless.]
I think Cobalt (and its team) is really awesome. It is used in production by YouTube TV which runs on a lot of smart TVs and other related devices, so definitely production-level. I am not sure how effective it is as a learning tool in general for browsers given that it is production code and not a toy, but I did get a sense of how a browser could be structured from working with it.
https://cobalt.foo/ has some useful additional context, and I found it interesting to poke through the following directories when working with it:
dom/: Various data models representing how the browser thinks about the DOM. HTMLElement is nice to examine to get a sense of what stuff is common between different element types, e.g. the layout system.
rasterizer/: Various implementations for rendering out the nodes from the render_tree. e.g. here’s the EGL render_tree_node_visitor, it’s responsible for drawing a lot of stuff.
script/: The interface for the JavaScript engine and two implementations.
cssom/: The CSS object model tree and various node types.
I also thought that browsing Starboard’s code (their way of porting Cobalt to multiple platforms) was super insightful for understanding different platform APIs. I’ve found it hard to find good documentation on how to write production-ready code for window creation etc., so this code can be nice to look at.
Anyways mostly talking out of my bum here, please take this with a grain of salt.
Thanks for the pointers! It looks pretty awesome, although yeah, maybe a bit big to learn from. I would like for someone to take this and make some sort of alternative browser, sort of like how V8 was taken for Node.js!
WeasyPrint is production work, not a toy, but it is a relatively small, self-contained, and rather complete CSS layout engine completely written in Python.
Of note, Servo’s CSS parser (hence now Firefox’s CSS parser!) was written by one of the main developers of WeasyPrint. This should give an idea of its quality, despite its lack of resources.
Seconding WeasyPrint. Its maintainers are also very good stewards of the project, and convinced me to throw a couple of bucks a month their way.
It’s also probably the most readable layout code you’ll find out there, especially compared to the very line-noise-y C++ you’ll find in browser layout (if only from the accumulation of cruft over the years).
Wow cool! It’s a lot smaller than I would have thought, even taking into account the html5lib dependency, which is also small. I thought browser rendering engines were of similar size to a JS engine, e.g. 500K or 1M lines of code.
My experience is that Python can be 5-8x shorter than C++, which would still be ~100K lines.
I think supporting only the modern stuff saves a lot of space. I think all browsers have to include XHTML parsers, which is basically a dead end.
~/git/other/WeasyPrint$ find . -name '*.py'|grep -v test | xargs wc -l|sort -n
968 ./weasyprint/css/__init__.py
1130 ./weasyprint/draw.py
1285 ./weasyprint/text.py
1286 ./weasyprint/layout/inlines.py
1421 ./weasyprint/css/validation/properties.py
1579 ./weasyprint/formatting_structure/build.py
21944 total
...
917 ./html5lib/_inputstream.py
1721 ./html5lib/_tokenizer.py
2791 ./html5lib/html5parser.py
2947 ./html5lib/constants.py
16204 total
The largest saving is from not supporting dynamic HTML. Making C++ scriptable from JavaScript takes a lot of code: usually the “script” part is as large as the “layout” part. WeasyPrint also does not implement any incremental layout or dirty tracking, because it is not interactive, and that also simplifies layout a lot.
Well, your first link recognizes WebWhirr as a fellow toy engine (and I guess NetSurf and Links are not «toy» enough?)
On the other hand, by now I actually consume the majority of WWWeb content by scraping HTML files, parsing them and running some (custom) code to convert them to text I read in Vim. I don’t yet have any plan for script support, though.
One day, I read a book called Daemon (by Daniel Suarez). It’s the year 2010 and I begin to think about the infrastructure and architecture of the Daemon.
I like personal wiki software, and I am trying to reconcile something like three to five places where I write my notes into one place, shared across all the devices I own at that time. I have this beautiful idea: let’s make a decentralized infrastructure that will allow you to work with structured data. It should also be a personal wiki. A tool which you can use to write your personal notes, to generate your homepage, but also to interact with all the feeds you use and generate (everything from Jabber and IRC to your last.fm feed and a catalog of your movie ratings on IMDb), index them and work with them with all of your programs. Websites? Pfffff. Just write an API for them and mirror them in this tool, so you can build custom views on the comment streams and whatnot! It will be glorious.
So I start to learn Rebol, because it seems to be really ideal for this kind of tool. I read everything there is about Rebol. I don’t mean that figuratively, but literally: I’ve read all the books, all the articles I could find, and most of the discussions, in a systematic manner, over a period of a year or two. Then I decide that the whole tooling is kinda immature and strange and broken and forgotten. But I learn that Rebol is like a Lisp with different syntax. Hmm, Lisp you say?
So I learn Lisp. I even write my own interpreter in D. I really like metacircular evaluators and macros and all kinds of strange stuff you can get with Lisp. I read about Genera and other live environments, which leads me to Smalltalk.
Smalltalk! This seems fine. And I really like the environment. So I order a bunch of books about Smalltalk and start using Pharo. But after a year or so, I realize that class-based programming is really not the best solution for my problem.
In 2015/9, I find Self. So I order a PDF manual printed on demand, and I find that it contains a lot of bugs. So I fix them, and order it again. It still contains a lot of bugs, so I fix them again and order another copy. Bonus: I now know Self, because I’ve read the handbook three times in a row. I have a whole personal wiki dedicated just to Self: what I don’t like, what I do, tips and tricks. I read almost all the papers about Self, and all the articles (I have a page in the wiki dedicated just to articles, and I read them one by one), and then the whole mailing list.
When I try to write my first programs, I find out that Self is really broken. Unicode doesn’t work, the environment is fragile, and it is written in C++. I don’t like C++. How hard would it be to rewrite it from scratch? I really don’t want to do that, but how hard would it be to just implement the frontend layer and use some existing backend virtual machine, like the JVM, or Lua?
One day I read about RPython. So I naturally try to write a Self parser in it. And then an AST representation. And then a compiler. And a VM. And suddenly, I am really doing my own reimplementation. And I am writing articles about Self, and about writing your own language, and having these strange conversations with people about structured operating systems and the nature of true object orientation, and religious flame wars about languages and their object models.
Suddenly it’s 2019, my language almost works, and there is this publicly accessible course about GraalVM in the city where I live. So I take some time off work and go there, and the teacher really likes my articles about programming your own language and connects me with some people from Oracle who worked on the original Self. Sadly, I have no time to push this towards something, because I am trying to bootstrap a hybrid of the first version of my wiki in PyQt and tinySelf (that’s what I call my little language).
Meanwhile, I still use CherryTree and also notion.so, and my distributed wiki is still mostly an idea and a bunch of concept images and pages and mindmaps, more than anything working. But I have dreams. Sometimes they are so vivid that I am really mad that they are just dreams. And I also still have a lot of frustration with the technology around me.
So I keep working, and shaving the yak. He doesn’t know it yet, but I am determined to shave him and collapse him back to depth 1. He is restless and kicks and fights me, but I’ll keep shaving him, with the utmost rigor and determination. I will do so until he is bald and naked. I will conquer all his yakiness, and use it for my own purposes, or die trying.
I guess that depends on the type of person you are (I was still a teenager back then) and also on the time. In 2010, this was really visionary, specifically the idea that you can control physical items via virtual interfaces you see in AR. Of course, there were people who talked about this, but this was the first time I’d read about it presented in a consistent and well-thought-out manner. This was before Oculus and Google Glass and Magic Leap, HoloLens, and all the rest of this technology we know today.
BTW: I think the most interesting description of the Daemon technology was in the sequel, “Freedom”.
The title misled me, I was a bit surprised by how much this focuses on the dev environment as opposed to software development best practices with Clojure / transitioning to Clojure from other languages. But I guess there’s already been a lot written about that.
I mentioned above that I deliberately kept it focused on the environment setup because I hadn’t found many posts about using Clojure with Vim that I found useful, so I wanted to leave something that might be useful for someone else.
But yeah, thinking about it now, the title should’ve been something else.
I’ll do a follow up later on where I dive deeper into my experiences with Clojure and functional programming so far, but I wanted to get a bit more time under my belt before doing that because I don’t feel like 3 months is enough time to really gauge that beyond a superficial level.
Disclaimer: I work at Google (on an unrelated product team)
It was not a false flag. I feel upset reading this. Articles like this don’t help me talk to others as a developer working at a corporation, they just make me want to shut up so I don’t get quoted like Russ.
I’m sorry that the article came across this way. It was not my intention to malign the good work of Russ or the rest of the Go team. I think they do good work and I’m a big fan of Go (I even wrote a book about it). In fact I think they usually make better decisions than the community. Vgo was just better than dep, which has only become more apparent the more I’ve used it. I think it’s great that he was able to make the tough call and do what he did, despite the social ramifications. Go would’ve been worse off if he hadn’t.
I quoted Russ because it was illustrative of an issue in the Go community about trust and ownership. It’s not any individual’s fault, and I actually think the issue is a bit overblown. The Go team has nearly impossible demands placed upon them and I think they do a good job of managing – far better than I could in a similar circumstance. Nevertheless, even if not wholly deserved, the issue is a real one, and this latest issue with the try proposal really strained the community. Had it gone through I suspect there may have been long-time developers who would’ve given up on the language, or at least pulled back from future involvement, distraught from a project that seemed to have taken a very different turn from what they signed up for. Is that fair? Probably not. But the tension was real regardless.
The article was meant in jest. Of course the proposal wasn’t a false flag. I suppose the last comment may not have made that clear.
I think the objective outcome here is great. I don’t think try would’ve been a good addition to the language. Like a lot of Go developers, I didn’t find it particularly Go-like, and it left a lot of us scratching our heads. So for things to pan out the way they did was a surprisingly good outcome, so good that you wonder if it wasn’t planned.
Happy accidents do exist, but I wanted to just be a bit playful with an absurd conspiracy theory. I see now how this could be misinterpreted due to sloppy writing. The fault is entirely my own and I apologize.
Oh, I’m sorry for my emotional reaction :/ Thanks so much for the explanation, I get where you’re coming from now.
I think I’m too attached emotionally and shouldn’t comment on these discussions. They’re 100% important conversations to have. Lobsters has made me think a lot about corporate teams working on public technology, their relationship with the community and the risks that dissonance between company goals and community goals can pose – I absolutely want to read more and think more about it.
I am afraid the final disclaimer especially may be much too tiny, both in font size and in length/depth. Even after seeing it, I still absolutely didn’t grasp what you really meant by it until I read your comment above. In other words, I’d say it isn’t counter-balancing the initial part of the article well enough; to me it even seems half-hearted enough that I took it more as an “I don’t really mean it, hee hee, right? wink, wink, nudge, nudge, or do I? ;D”
Please remember the old truth, that on the Internet, it’s not possible for the reader to know if the writer is sarcastic, or if they really mean it, while the reader is often convinced they know the answer. It’s unfortunate, but I’ve seen it introduce problems much too many times. To the extent that I’ve even seen it used as an explicit and effective trolling tactic, to seed conflict in a community.
Vgo was just better than dep, which has only become more apparent the more I’ve used it. I think it’s great that he was able to make the tough call and do what he did, despite the social ramifications.
So true. I’ve been thinking this during the whole drama. Happy to read it here.
Had it gone through I suspect there may have been long-time developers who would’ve given up on the language
This goes both ways. I agree that maybe try was not the right choice, but I could consider giving up on the language if nothing is done about error-handling verbosity.
PS: Nice and funny article by the way ;-)
If this ever happens to you, then hopefully the takeaway is to apply the feedback of your peers so that they can help you think about your actions, instead of it inspiring you to never communicate again. The initial problem that spawned this now-paranoid group of Rust fans was a lack of communication from Russ, so not communicating is the actual problem which caused the issue in the first place.
There’s also always the option of not working for big evil corporations. :)
Wow, this really helped me understand that I have some seriously bad biases against congresspeople’s technical understanding.
Maybe I’m just too biased, but something about his phrasing made me think that he had these questions given to him, with the intent to sound technical? It’s just an impression, and I don’t know anything really about the person, but nevertheless I’m surprised that I seem to be the only one mentioning this (even if it’s wrong)?
I’m pretty sure a good campaign manager will find ways to insert topical questions in congressional hearings. They’re one of the few ways a congressperson can get known outside their districts.
This is not the most hot-button issue, so perhaps it’s better to give him the benefit of the doubt? Sound bites from this hearing are unlikely to make it to the nightly news. The congressman was an intelligence officer for a while, so he may genuinely have that experience.
Here’s the talk video from JsConfEU 2019: https://www.youtube.com/watch?v=MO8hZlgK5zc
Go has community contributions but it is not a community project. It is Google’s project. This is an unarguable thing, whether you consider it to be good or bad, and it has effects that we need to accept. For example, if you want some significant thing to be accepted into Go, working to build consensus in the community is far less important than persuading the Go core team.
This is, essentially, not that different from how most projects work. Even projects which have some sort of “community governance” seldom have voting rounds where everyone can vote. Only contributors/core members can vote.
Accepting all PRs is clearly not a good idea, so you need to do some gatekeeping. The biggest source of disagreement seems to be on exactly how much gatekeeping is needed. The Go authors have pretty clear visions on what should and should not be in the language, and gatekeep a bit more than some other languages. Putting stuff in the language can also be problematic (see: Python’s := PEP drama).
On the specific point of generics (sigh, again), I think the premise of that tweet is wrong. It suggests that the overwhelming majority of the community is just screaming for generics, and the Go Overlords stubbornly keep saying “no”. That’s not really how it is. In the most recent Go survey 7% gave “lack of generics” as a response to “what is the biggest challenge you face today?” which is hardly overwhelming (although it’s not a clear “would you prefer to see generics in Go”, so not a complete answer).
Anecdotally, I know many Go programmers who are skeptical or even outright hostile to the idea of adding generics to the language, although there are obviously also many Go programmers who would like to see generics. Anecdotally, I’ve also noticed that preference for generics seems negatively correlated to the amount of experience people have with Go. The more experience: the less preference for generics. I’ve seen people with a C# or Java background join our company and strongly opine that “Go needs generics, how could it not have them?!”, and then nuance or even outright change their opinion over the months/years as they become more familiar with the language and why the decisions were made.
The author of that tweet claimed in the Reddit thread:
I am suggesting that implementation of generics will be easy . All am suggesting is we (community) should implement prototype or so proof of concept and present it to committers .
Which seems to suggest that this person is not very informed on the topic. The Go authors have been writing and considering generics for at least 10 years, and thus far haven’t found an approach everyone likes. You can reasonably agree or disagree with that, but coming in with “oh it’s easy, you can just do it” is rather misinformed.
The Elm guy had a good presentation a while ago (“The Hard Parts of Open Source”) where he shared some of his experiences dealing with the Elm community, and one of the patterns is people jumping in on discussions with “why don’t you just do […]? It’s easy!” Most top-of-the-head suggestions to complex problems you can type up in 5 minutes have quite likely been considered by the project’s authors. They are not blubbering idiots, and chances are you are not genius-level smart either.
This is also the problem with a lot of the “conversation” surrounding generics in Go. People like this guy jump in, don’t seem to have informed themselves about anything, and shout “why don’t you just …?!”
Sidenote: I stopped commenting on anything Go-related on /r/programming, as there are a few super-active toxic assholes who will grasp at anything to bitch about Go (even when the thread isn’t about Go: “at least it’s not as bad as Go, which [… rant about Go …]”). It’s … tiresome.
I think the premise of that tweet is wrong. It suggests that the overwhelming majority of the community is just screaming for generics, and the Go Overlords stubbornly keep saying “no”. That’s not really how it is.
Be wary of selection bias here: if someone really thought generics were important, they wouldn’t be in your community to be asked the question. If the goal of the language is to serve the people already using it, that’s a fine thing, but if it’s to grow then that’s harder to poll for.
Every community is biased in that sense. People who dislike significant whitespace aren’t in the Python community, people who dislike complex syntax aren’t in the Perl community, etc.
I don’t think the Go team should poll what random programmers who are not part of the Go community think. I don’t even know how that would work, and I don’t think it’s desirable as the chances of encountering informed opinions will be lower.
Anecdotally, I’ve also noticed that preference for generics seems negatively correlated to the amount of experience people have with Go. The more experience: the less preference for generics.
This part was also concerning to me. If Go is “blub”, then of course people who are more used to not having generics wouldn’t necessarily think generics are preferential.
I don’t think this fits the “blub” model. People who have only used “blub” don’t understand what they are missing. But here we are talking about people who have got experience with generics: the more experience with Go they gain, the more they understand why Go does not have them.
There is the old adage of being unable to please everybody.
It’s better to cater to the crowd you have than the whims of random people.
In the most recent Go survey 7% gave “lack of generics” as a response to “what is the biggest challenge you face today?” which is hardly overwhelming (although it’s not a clear “would you prefer to see generics in Go”, so not a complete answer).
I think it’s also worth mentioning that “lack of generics” is the third biggest challenge in that survey (after “package management” and “differences from familiar language”).
“I am suggesting that implementation of generics will be easy”
Do you have a link to this comment? The way it’s phrased makes me think that it’s a typo and they meant to say “I am not suggesting that implementation of generics will be easy”.
For outrageously inflammatory post titles like this one, I skip straight to the Lobsters top comment. Cunningham’s Law hasn’t failed me yet.
Reminder that Go has generics for a small set of built-in data types, just not user-defined generics. Let’s be explicit: the language already has generic types in its syntax, e.g.:
[n]int
[]int
map[string]int
It’s not a great stretch from this to something like tree[int]. Given this, and given that the language designers have put it off for so long and that so much of the community is antagonistic towards it (where does that antagonism come from–where did people pick up on it?), it’s not a big stretch to infer that they simply don’t want Go to have user-defined generics.
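For what it’s worth, here’s a small Go sketch of that asymmetry as things stand at the time of this thread (the Stack type is made up purely for illustration, not anything from the standard library): built-in maps and slices take an element type right in the type syntax and get compile-time checking, while a user-defined container has to fall back to interface{} and type assertions.

package main

import "fmt"

// Built-in containers are effectively generic: the element type is a
// parameter supplied where the type is written, checked at compile time.
func builtins() {
    counts := map[string]int{"a": 1}
    names := []string{"x", "y"}
    fmt.Println(counts, names)
}

// A user-defined container has no way to take a type parameter, so it
// falls back to interface{} and type assertions at every call site.
type Stack struct {
    items []interface{}
}

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
    v := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return v
}

func main() {
    builtins()

    var s Stack
    s.Push(42)
    n := s.Pop().(int) // the compiler can't check this assertion for us
    fmt.Println(n)
}

The gap between what map[string]int gets for free and what Stack has to do by hand is, I think, exactly what people are asking for when they ask for user-defined generics.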
where does that antagonism come from–where did people pick up on it?
I picked it up in the C++ community. From build times to breakage to complexity, I have repeatedly implemented external generics (code generation) solutions that were simpler to manage and gave far better results for my projects than using templates.
Those are not really generic types but variable-length types. Not exactly the same.
it’s not a big stretch to infer that they simply don’t want Go to have user-defined generics.
The catch with your inferration (is that a word? Is now I guess) is that the Go authors have explicitly stated otherwise many times over the years, and have written a number of proposals over the years (the Go 2 Contracts proposal lists them). There have also been a number of posts on the Go issue tracker, Reddit, HN, etc. by Go authors stating “we’re not against generics, we’re just not sure how to add them”.
Those are not really generic types
If you take a look at your linked design document, e.g. maps are repeatedly used as generic types in the proposed Go polymorphic programming design.
Go authors have explicitly stated otherwise many times over the years, and have written a number of proposals over the years
Good point; before the generics design document was published, I suppose my inference would have been more credible. Now I guess they are serious about generics, which seems unfortunate for the people you mentioned who vehemently hate them :-)
I got the same email as that HN page, which seems “official”. It’s also here: https://success.docker.com/article/docker-hub-user-notification
News is best posted elsewhere, and I seriously doubt very many ‘lobsters’ who were affected first found out about it here since emails were sent out.
This reminds me of mandoc’s conversion away from sqlite.
This deck was a great read. I feel like it points to the opposite takeaway, although it’s definitely relevant.
In mandoc’s case they shifted from reusing a heavier tool (SQLite) to writing a lighter one that was better-suited for their problem (less maintenance from audits).
In this article’s case they’re advocating to be careful when favoring a lighter tool over a heavier one, because you might end up with a tool that’s stupid light (useless and fragile).