1. 57

    I don’t want in the least to be a jerk about this piece. I can relate to it, inasmuch as my last effort to find a new job was for the most part a miserable and disheartening experience, and I don’t personally have anything like the will to engage in good faith with the kind of bullshit that’s described here.

    With that said, this kind of thing - of which I’ve read countless iterations by now - mostly functions for me as a reminder that, at root, the industry is completely fucked and somehow we’re all playing along. The fundamental work of most “software engineering” is poisonous to the world, the organizations described in these essays are owned by people who fall somewhere on a spectrum between con artists and malevolent oligarchs, and a class of diligent strivers is engaged in competing for a place in the second-order technical/administrative ranks. A status that lets you earn much better money than in most of the rest of the economy while directly helping engineer everybody else into a state of precarity. (A state which, of course, is what awaits you if you exit the predatory scene you’re enabling before extracting enough resources to be insulated from its effects.)

    Doing well in this environment seems to require either a quietly mercenary attitude or a profound capacity for self-deception. There is of course a lot of human wreckage along the way. It’s striking to me how seldom I see anyone make the leap from “trying to get these jobs was a bad time full of getting raked over the algorithmic live-coding coals” to “we should burn this entire structure to the ground”.

    1. 14

      To be fair though, software engineering isn’t a homogeneous field, and I’m not sure ‘most’ of it is poisonous to the world. I have certainly never worked on anything I would consider poisonous, enabling of some kind of evil, or similar. What I do now is aimed at helping people get out of bad situations and make themselves more stable.

      If you have some analyses of software engineering across the full spectrum that can back up what you’re claiming, I’d be interested to see them, as what I hear in articles like this just sounds like it’s from a bubble I haven’t been near.

      1. 10

        what I hear in articles like this just sounds like it’s from a bubble I haven’t been near

        Those are exactly my feelings. For the last 10 years in the field I haven’t been anywhere close to the environment described in pieces like this, and yet every time they pop up people (mostly on the orange site though) always find it relatable and comment with their similar experiences.

        Perhaps it’s just my own bubble, but it seems so alien and bizarre that it might as well be an entirely different industry. My first job was in a ~300-person company, and that was the last time I had a technical interview, and the last time someone asked for my resume. Everything since then has been an “oh, we already know you from somewhere else/seen your code on github, that’ll do, when can you start”.

        All this talk about having a Top Dog Company in your CV, and I’ve yet to see anyone even bother to look at mine. All this bragging about massive salaries if you can get there, and yet my friends at Google get about the same money as I do, with me bouncing between small companies and niche startups. I don’t think I’m that unique here either – and it makes me wonder why people choose to put up with that crap. Multi-stage interviews, GDPR-insulting background checks, and for what? A status symbol, bragging rights? In an industry that is so starved for competent engineers that it’s willing to please you left, right, and center if you ask for it?

      2.  

        Your comment has generalizations such as

        at root, the industry is completely fucked and somehow we’re all playing along. The fundamental work of most “software engineering” is poisonous

        and your solution (though I hope it’s metaphorical) is unrealistic and violent:

        “we should burn this entire structure to the ground”.

        I think that comments like that invoke raw emotions which harm constructive discourse. It is very unlikely that the industry is “completely fucked”. I guess strong language feels good sometimes, but it may lead to people just expressing more of it and either cherishing their helpless victimhood or going in for (hopefully harmless) violence.

      1. 14

        Complaints about interviewing seem rather similar to complaints about open-plan offices. Pretty much everyone agrees it sucks, but it also never seems to change 🤷‍♂️ Indeed, things seem to be getting worse, not better.

        My biggest objection is that you’re often expected to spend many hours or even days on some task before you even know if you have a 1%, 10%, or 90% chance of getting hired, and then you get rejected with something like “code style didn’t conform to our standards”, which suggests they just didn’t like some minor details like variable naming or whatnot.

        I usually just ignore companies that treat my time as some sort of infinitely expendable resource, which doesn’t make job searching easier, but does free up a whole lot of time for more fulfilling activities.

        1.  

          Getting rid of open-plan offices seems doable. Yet, we do not know how to screen for good engineers reliably, with or without wasting anyone’s time.

          1.  

            Hire them for an extremely short contract? Bootcamp does that. They pay travel and $1000 to work for them for a week, which doesn’t seem too bad.

            1.  

              How would that work for anybody who’s currently employed?

              1.  

                The last time I interviewed, I was talking to about 10 companies. If the hiring process involved a short contract, they’d have been off my list.

                1.  

                  “Hire a candidate for a week” doesn’t strike me as a solution to, “Screen people without wasting anyone’s time.” At the very least, it forces the candidate to spend a week in what amounts to an extended interview. It seems to me like it’d be far more time-consuming for the existing team as well, what with coming up with a steady stream of short projects that require little or no onboarding, working with the candidate for the week instead of whatever else they’d otherwise be working on, evaluating the candidate’s work, and so on.

                  I’m not arguing that it’s a bad idea in general, just that I suspect it isn’t a good way to reduce wasted time.

            1. 11

              That is one area in which Java has impressed me. Not sure about the latest versions, but if you refrained from internal APIs, your program wouldn’t break with a new version.

              That was _not_ true for C++, which was an older language in 2005 when I had to make sure that C++ code would work with newer Windows/AIX versions. I think that the priorities of the language designers matter a lot.

              Go seems relatively conservative, with only slow language and library evolution - so to an outsider like me, it looks relatively safe over time.

              Rust changes a lot but is committed to keeping old code working. But idiomatic Rust and the library ecosystem are still evolving rapidly. Therefore, I think your code might have a fair chance of still working in 10 years, but only with ancient library versions.

              1. 2

                I just incorporated C++ code I wrote 18 years ago into a new project without effort. The C++ core language API does have this property. External libraries on the other hand might not, but this is also true for Java. Java has one advantage here in that the graphics APIs are part of the core language.

                1. 2

                  Go has an extreme compatibility guarantee. Rust’s is much weaker - compiling older Rust code with a new toolchain can require quite a bit of fiddling (much less once you know the language well).

                  1. 4

                    I’m not sure why you say it’s weaker for Rust. This is the point of their “editions”. https://doc.rust-lang.org/edition-guide/editions/index.html

                    Software authors have to opt-in to breaking compatibility.
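
                    As a rough illustration (a throwaway sketch, not something from the thread): the edition is declared per crate in Cargo.toml, and new compilers keep accepting old editions, so a breaking change like reserving the async keyword only applies once a crate opts in:

                    ```rust
                    // Sketch: with `edition = "2015"` in Cargo.toml this still compiles on a
                    // current toolchain, because `async` was an ordinary identifier back then.
                    // Opting the crate into `edition = "2018"` (or later) turns the same code
                    // into a hard error, since `async` became a reserved keyword there.
                    fn main() {
                        let async = 3; // fine in the 2015 edition, rejected from 2018 onwards
                        println!("{}", async);
                    }
                    ```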

                    1. 2

                      I say that because I’ve tried to get a rust program written 18 months ago to compile, and had to fight all kinds of mess.

                      This is largely because in order to target the microcontroller I was using, you needed to run nightly. Original author didn’t pin the nightly version they were using, though.

                      1. 2

                        I think you should have mentioned the circumstances in the initial comment.

                        That said, Go as a language does not evolve much, and that makes it a better platform right now if you are change-averse.

                        1. 1

                          Go doesn’t have a “this may (will) be broken in 6 months” mode; rust does. That’s the weaker guarantee.

                          Personally I like rust better than go, but that wasn’t the question being asked.

                          1. 1

                            Sorry for the “should”. I still think that misrepresents stable Rust. Probably there is some misunderstanding between us. Or I was very lucky. Do you really think it is common for your code to break with a new stable update? This happened to me once, years ago, with an obscure crate, because of a fixed compiler bug, and the language team had offered a fix to me (!!!) as a pull request upfront.

                            For stable Rust, they strive for high compatibility. As with Go, they may make breaking fixes for safety reasons and a few others. Otherwise, the language itself has strong stability guarantees, and the language team has upheld those well as far as I know.

                            For libraries, it’s different. The ecosystem is evolving.

                            1. 1

                              As an outsider trying to get existing rust code to run, there isn’t an obvious distinction.

                              The existence of stuff that breaks under the “rust” name is significant.

                              Possibly worse because for a long time nightly was required to target microcontrollers, so lots of things used unstable features.

                              1.  

                                Which distinction?

                                Between the language and the ecosystem? => I somewhat agree. The language being stable is an important ingredient in getting the ecosystem to mature, though.

                                Between stable and nightly? => well, using nightly screams, “I care more about some fancy new experimental feature than about stability.”

                                1.  

                                  Between stable and nightly? => well, using nightly screams, “I care more about some fancy new experimental feature than about stability.”

                                  As an outsider, I come to rust when there’s an existing project I’d like to get running.

                                  In the rust projects I’ve looked at, the ratio of ‘needs nightly’ to ‘runs on stable’ is about 50/50.

                                  To me, that means that the stability of rust is only as good as the stability of nightly. That would be different if I were writing my own rust, but I’m not.

                                  As a person who wants to compile software written in rust: It’s quite likely that the software doesn’t compile without hacking it.

                1. 1

                  That surprised me. I thought I remembered that the difference was surprisingly small.

                  It’s weird that the article doesn’t mention the compiler.

                  1. 2

                    It’s more a question of runtime and ABI than of compiler. If you compile to linux/ELF and your runtime uses libunwind there isn’t that much the compiler can do to affect the performance.

                    Many compiler people talk as though the performance of exceptions is the performance of code when no exception is thrown. Look at the phrasing about zero-cost exceptions, for example. (BTW, zero-cost exceptions either have a small cost or none, depending on how you count. It’s quite fascinating if you’re fascinated by that sort of thing.)

                  1. 8

                    @nmattia: lots of deserved praise for niv there ;)

                    1. 3

                      I was only surprised by point 3.

                      What is more, you will notice these quirks while programming which is a good time to notice quirks.

                      Given its aim to type real-world TypeScript, I think it is a very well-designed language.

                      1. 7

                        Did anyone else laugh at “This makes Nix packages a kind of blockchain.”?

                        (I assume that it was meant as a joke)

                        1. 9

                          I intentionally hide jokes like that in my more ranty posts, and you are the first one to mention it. You win the prize of knowing you are the first to report it. Congrats!

                          1. 2

                            NB: Noticed it but didn’t find it funny. Blockchains are serious business!11

                        1. 16

                          It seems the author has a different concept of “clean”. To me, “clean” is “less complicated” rather than “less duplicated”.

                          1. 3

                            I think that “clean” is not a good word to use because it is subjective and ambiguous. It is better to ask

                            • “Do you understand the code?” => “Can I make it easier to understand?”
                            • “Is it correct? Can you determine easily that it is correct?” => “Can I make it easier to verify?”
                            • “Is it fast enough?”…
                          1. 6

                            I like the article’s sourcing for the word “defunctionalize”.

                            This pattern indeed comes up all the time and I did not have a word for it.

                            1. 5

                              For those of you also puzzling over this message: it’s a meet up organized by a computer hobbyist club (Chaos Computer Club) in a town in Germany. Sounds like fun.

                              1. 11

                                a meet up organized by a computer hobbyist club

                                … with 16000 attendees. :-)

                                1. -1

                                  Ah, not so much fun then.

                                2. 6

                                  rofl, computer hobbyist club.

                                  1. 1

                                    It kinda is. It is somewhat amateurish, which adds to its charm. It is supported by many volunteers.

                                    1. 4

                                      I think this needs to be put into some context, because people often mean different things by hobbyist, amateur, and professional. This is not to disagree or to defend anything, but maybe to counter a wrong impression. Obviously this is biased by my own impressions.

                                      It’s not professional in the sense that nobody tries to sell you stuff and that people are volunteers. Correct.

                                      Similar to most Python, Ruby, Go, Cloud, JavaScript, Kubernetes, etc. conferences, the majority of talks are introductory talks geared towards beginners, especially if you look at the main stages. I think that might be related to them usually having good recording equipment and bigger audiences. I think on average it gives a more professional impression than many clearly professional conferences, when ignoring outliers.

                                      However, it’s not true in the sense that you’ll only find people who do computers purely as a hobby. You tend to meet a lot of security professionals if you go there. You will find in-depth talks on side stages. You will mostly find highly experienced people working in the field of security if you attend the conference and walk around while talks are going on. Especially, but not only, in the assemblies area or if you volunteer.

                                      As for the club: it’s hard to call it hobbyist based on the work they do (education in schools, political hearings, work for the press, etc.). But then again, of course, that is largely a hobby. It is also correct if you consider that the majority of members are hobbyists.

                                      It tends to feel very professional (as in things working, even at large scale and not by chance) on the organizational, size, and infrastructure side of things.

                                      A thing that’s not really related to this, but might stand out: despite the size, people largely are nice to each other and, oddly for such a size, it has that family/community feel.

                                      Also, the conferences are very political overall. I’d claim this is due to them being hacker conferences. So the views largely shared by the people one calls hackers get extended to adjacent areas: a focus on freedoms, working together, etc., and then of course gaining contact and mixing with similar political movements. By political I don’t necessarily mean governments of countries, but the level of society and human interaction. That doesn’t mean there are no conservative people at all, but they obviously are only a fraction of the attendees, and a shrinking one.

                                      1. 1

                                        Fair. I was too lazy to spell it out in this detail on my phone.

                                1. 4

                                  I think that the message is an important one. I only follow the advice because I am forced to at work ;)

                                  That said, I think that my perspective on software engineering approaches is wider than that of most of my colleagues (most of them very smart, and better at many things). It is quite hard to communicate my knowledge to people with a narrower perspective, though, and to be honest, much of the insight is hard or impossible to objectify.

                                  1. 4

                                    While I actually like the idea in this article for a simpler basis that heavily emphasizes coproducts/sum types, this is not OK:

                                    Protobuffers were obviously built by amateurs because they offer bad solutions to widely-known and already-solved problems.

                                    Obviously, they were most likely professional software developers, regardless of the fact that they made a couple of choices that the author disagrees with.

                                    It would also be nice if the author sketched out the consequences of their proposed design.

                                    In particular, they rant about fake compatibility claims. I might be missing something, but I think that without default values their design does make evolving messages difficult in simple cases. You cannot enhance a “product” with a field because that would be incompatible. One way around this would be to use a versioned “coproduct” at the top level and emit deserialization errors if there are unknown variants.
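
                                    To make that concrete, here is a rough sketch of the idea in Rust (message and field names are made up): each revision is a variant of a top-level sum type, so a reader either matches a version it knows or fails at deserialization time with an explicit error.

                                    ```rust
                                    // Hypothetical versioned top-level message: a new field means a new
                                    // variant, not a silently-defaulted addition to an existing product.
                                    enum Request {
                                        V1 { user_id: u64 },
                                        V2 { user_id: u64, locale: String },
                                    }

                                    fn handle(req: Request) {
                                        // Readers handle exactly the versions they know about; an unknown
                                        // version would already have failed to deserialize into this enum.
                                        match req {
                                            Request::V1 { user_id } => println!("v1 from {}", user_id),
                                            Request::V2 { user_id, locale } => println!("v2 from {} ({})", user_id, locale),
                                        }
                                    }

                                    fn main() {
                                        handle(Request::V1 { user_id: 7 });
                                        handle(Request::V2 { user_id: 7, locale: "en".into() });
                                    }
                                    ```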

                                    That somewhat works, but I think that there are use cases that can be solved much better. Compatibility often depends on whether the value is read or written. Obviously, REST APIs often use the same message for both - which comes with its own bag of problems. If you are only concerned about maintaining compatibility with message readers, additional fields which are ignored might be just fine.

                                    1. 1

                                      From the article:

                                      “If you still want to make sure the operation runs in the background and doesn’t block the current task either, you can simply spawn a regular task and do the blocking work inside it:”

                                      That means it still pays off to use real async if you do not want one future to block the whole task.
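
                                      A minimal sketch of what the quoted suggestion looks like with async-std’s task API (the workload here is just a stand-in):

                                      ```rust
                                      use async_std::task;
                                      use std::time::Duration;

                                      fn main() {
                                          task::block_on(async {
                                              // Put the potentially slow operation into its own task so the
                                              // current task's other futures never have to wait on it.
                                              let handle = task::spawn(async {
                                                  task::sleep(Duration::from_millis(200)).await; // stand-in for real work
                                                  42
                                              });

                                              // ... do other async work here ...

                                              println!("background result: {}", handle.await);
                                          });
                                      }
                                      ```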

                                      1. 1

                                        To be clear: potentially blocking the current task might be what you want. (e.g. if 99.9% of all operations fall under the threshold).

                                        1. 1

                                          Sure. But the cost of spawning (without creating a new thread, if nothing blocks) should be small, right? Basically on the order of a couple of allocations.

                                          1. 1

                                            Spawning in async-std is exactly one allocation. Yes, spawning is cheap; you can do that often.

                                      1. 2

                                        The article seems to suggest that spawn_blocking is not needed anymore and/or is a noop with the new runtime? Does that mean that there are cases where the new spawn_blocking causes all futures in a task to block on an operation where it didn’t do so before?

                                        https://docs.rs/async-std/1.3.0/async_std/task/fn.spawn_blocking.html

                                        1. 1

                                          To answer my own question, spawn_blocking still spawns a new task.

                                          So, cool, it does not regress and it still makes sense to use it around potentially blocking code. If you forget to wrap blocking code, though, the new runtime will isolate the effects to a single task, AND it is faster if the code doesn’t happen to block (for long).

                                          Cool!

                                          https://github.com/stjepang/async-std/blob/new-scheduler/src/task/spawn_blocking.rs
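
                                          For reference, a rough sketch of wrapping a blocking call this way (assuming async-std 1.x; in that version spawn_blocking may still sit behind the crate’s unstable feature flag):

                                          ```rust
                                          use async_std::task;

                                          async fn load_file(path: &str) -> std::io::Result<String> {
                                              let path = path.to_string();
                                              // The synchronous, potentially blocking read goes through spawn_blocking,
                                              // so sibling futures in the calling task are not stalled by it.
                                              task::spawn_blocking(move || std::fs::read_to_string(path)).await
                                          }

                                          fn main() {
                                              let res = task::block_on(load_file("Cargo.toml"));
                                              println!("{:?}", res.map(|s| s.len()));
                                          }
                                          ```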

                                          1. 1

                                            spawn_blocking in its current form might be removed or turned into a hint that the runtime should spin up a thread regardless.

                                        1. 1

                                          Didn’t know about “skip locked” yet. Interesting.

                                          1. 1

                                            We implement something similar with “skip locked” based on this - https://www.holistics.io/blog/how-we-built-a-multi-tenant-job-queue-system-with-postgresql-ruby/.

                                              The difference is that we’re polling Postgres for new jobs instead of using LISTEN/NOTIFY. But reading the comments on HN, it seems that even with LISTEN/NOTIFY you still need to poll on start to check for any unprocessed jobs (in the event the worker crashed when the job was submitted). So I don’t see much value yet in replacing the polling.

                                              Our implementation is in Python, using multiprocessing (the pebble library) for the worker. The main motivation for us to implement it in PostgreSQL (we were using django-q with SQS before that) is to have a rate limit per user, something I found lacking (or non-trivial) in many other job queues like Celery.
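
                                              For anyone curious about the core trick, here is a generic sketch of the claim query (not their implementation, which is in Python; it assumes a simple jobs table with id and status columns and uses the Rust postgres crate):

                                              ```rust
                                              use postgres::{Client, NoTls};

                                              // Claim at most one pending job. SKIP LOCKED makes concurrent workers skip
                                              // rows that another worker already holds a lock on, instead of waiting.
                                              fn claim_job(client: &mut Client) -> Result<Option<i64>, postgres::Error> {
                                                  let mut tx = client.transaction()?;
                                                  let row = tx.query_opt(
                                                      "SELECT id FROM jobs
                                                       WHERE status = 'pending'
                                                       ORDER BY id
                                                       LIMIT 1
                                                       FOR UPDATE SKIP LOCKED",
                                                      &[],
                                                  )?;
                                                  let id = match row {
                                                      Some(r) => r.get::<_, i64>(0),
                                                      None => {
                                                          tx.commit()?;
                                                          return Ok(None); // nothing pending; poll again later
                                                      }
                                                  };
                                                  tx.execute("UPDATE jobs SET status = 'running' WHERE id = $1", &[&id])?;
                                                  tx.commit()?;
                                                  Ok(Some(id))
                                              }

                                              fn main() -> Result<(), postgres::Error> {
                                                  let mut client = Client::connect("host=localhost user=postgres dbname=jobs", NoTls)?;
                                                  println!("claimed job: {:?}", claim_job(&mut client)?);
                                                  Ok(())
                                              }
                                              ```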

                                          1. 11

                                             My most recent experience with Windows 10 might also be helpful for someone thinking about it seriously…

                                             I tried the most recent Windows 10 release this weekend too, when I built a new workstation. By default it’s going to run a tiny KVM hypervisor with various OSes on top, with one exclusively owning most I/O and resources including the GPU – PCIe passthrough is easy today.

                                             So, I’ve put a bare-metal Win10 on the NVMe drive from a USB flash disk. The first stage of installation (WinPE extracting) went fast thanks to the PCIe 4.0 disk, but post-install (it’s called the OOBE process in the Windows installer, I think) was a mix between registering on a very crappy site and The Sims loading screens. Right after booting I’m forced to click “no”, “god please no”, “nooo” on all the Microsoft stuff being thrown at me, including geolocation, telemetry, “advertising personalization”, some ID account access and so on. Damn, I’m installing an OS, not an Amazon account!

                                             After that phase, there are many screens like “Setting things up”, “We think you’ll like it”, “Preparing stuff”, “Updates are nearby”, “Reticulating splines”, and so on. I’m almost 100% sure that someone from Maxis got hired onto the Windows development team.

                                            Finally, after a considerable bit of time you’re greeted with the desktop. But not a clean one, as it opens the Edge browser right in front of you with some welcoming corporate crap. Closing it is somewhat tricky, as another SmartScreen popup is being shown.

                                             After all that, I activated my copy (I always install offline), did all the updates (that’s also not straightforward, as they come in rounds instead of everything at once despite the OS build being very recent and fresh) and rebooted for each step. Fine for now.

                                             Then I installed Scoop for software management, WPD for backdoor management and SDI Origin for driver management, as some of my hardware isn’t available in the WHQL database so it can’t pull drivers from Windows Update, whereas SDI pulls them from a respected DriverPacks repository. So I installed many utility programs, did the WPD stuff, updated drivers and rebooted each time. Great.

                                             After ~2 hours, the OS is ready to be used. The same installation on my $linux_distro takes up to 30 mins, but I don’t take it as a comparison, as no one can efficiently automate Windows installation declaratively without paid proprietary tools.

                                             But the most important fact is that the installation broke right after I started to install The Actual Productivity Programs for work. After one of the reboots I got welcomed with an infinite “spinning dots” boot screen which didn’t go away, and I wasn’t able to debug what was stopping the boot process, as the Windows boot is by no means verbose. However, after 3 reboots it got into the “diagnostic screen” where you can “repair startup”. That didn’t help, of course. Even the “reset Windows” option (which, apparently, restores the system WIM via DISM and retains %USERPROFILE%) didn’t help - I got a BSOD at 19% progress. Having had enough of that, I wiped it off and proceeded to install more reasonable operating systems.

                                             And it’s not the first time I’ve had the same or a similar experience with this OS, not even the second. I’m not even complaining now about its general inconsistency both in UI and actual behavior, absolutely zero transparency, and fewer management options than previous editions (but PowerShell is a very nice exception which I look forward to; it can’t help the whole ecosystem though). Sometimes I’m starting to think these XP die-hards (still active today) aren’t as wrong as it first looks.

                                            1. 1

                                              Hi,

                                               I am using Windows on my desktop, mostly for “Davinci Resolve” and “Lightroom”. I’d love to put NixOS on it if I can somehow run these programs at full speed. Is that what the PCIe passthrough would achieve for me?

                                              Any pointers for such a hypervisor setup that you are using?

                                              1. 1

                                                Don’t forget Wine, you don’t need to virtualize a full Windows install when you can just use Wine.

                                                Resolve is natively available on Linux anyway. (And I would really recommend switching to Darktable from Lightroom :D)

                                                1. 1

                                                   Resolve is “natively” available for an older Red Hat. I once spent half a day trying to get it running on NixOS, without success (but that doesn’t mean it’s impossible). Suspiciously, the minimum hardware requirements are higher on Linux. Worse performance? (and in video editing, it matters)

                                                   I used darktable before Lightroom. It is a little bit quirky (or was) but really quite cool. I loved its wavelet-based tool for applying curves on different detail levels. Lightroom for me was way faster (different hardware, probably unfair), though, and I love that I can apply the “native” film simulations of my Fuji XT3. Probably, I could create good-enough presets.

                                                   Wine never worked reliably for me (except for StarCraft 1, back in the day). Maybe I am doing it wrong.

                                                   Overall, the KVM + passthrough + Windows setup appealed to me because I can imagine that, once set up, it will just work. And I could still easily use everything good in Linux. Does that make sense?

                                                  1. 1

                                                    Well, Resolve is in AUR..

                                                    Lightroom for me was way faster

                                                    Yeah, DT is slower (on CPU) than most other tools because (AFAIK) DT uses 32-bit floats for everything. Someone really cares about precision I guess :) It practically requires a GPU.

                                                2. 1

                                                   If you equate “full speed” with GPU acceleration, then yes. The excellent article about that is on the Arch Linux wiki: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF - but if you’re lucky enough and didn’t allocate the GPU in the host OS first, just adding it to the VM XML in KVM, via virt-manager or the CLI (nodedev-detach…), will probably work.

                                                   I think NixOS is an excellent base for such a hypervisor, especially when you want to keep it immutable. Just set up a very “core” system, add libvirtd on top of it, arrange some storage, and you’re done. An additional gain from using NixOS is that you can probably share the expression cache between your host hypervisor OS and the “desktop” VM (correct me, NixOS people, if I’m wrong here…).

                                              1. 10

                                                Classic Microsoft to think that the fix for “G is preventing you from shutting down” is to give the correct full name, “GDI+ Window (something.exe) is preventing you from shutting down”. Yeah, that’s so much more user-friendly and actionable.

                                                1. 3

                                                  What are they supposed to do?

                                                  Anyway… do we both agree that something.exe is the part of the system with the bug? Or are you questioning the premise and saying that the OS shouldn’t allow arbitrary applications to prevent Windows from shutting down?

                                                  1. 12

                                                    If you’re going to display a message to the user, and especially if you’re asking a question, you need to make sure the message makes sense to a user. Saying gibberish like “GDI+ Window” is worse than saying nothing. And the “GDI+ Window” isn’t some unknown thing an application made up – it’s part of Windows itself – so there’s no excuse for the random jargon.

                                                    The way this happens is that someone implemented the “ask the user if it’s OK to kill an application” feature in isolation and had no authority or ability to do any better than this, both because the organization does not prioritize fixing this basic level of user model and interaction, and because the system is so complex and fragile that even if it did there wouldn’t be much hope of doing much better.

                                                    That said, in this specific case, it’s sad they don’t at least recognize this is a low-level implementation detail window, and special-case it to reach into the app resources to find a human-readable name, and just ask if you want to kill “Contoso Whatever Application” instead of saying this.

                                                    So, as I say, classic Microsoft. (I worked there, mostly in Windows user experience, for 11 years.)

                                                    1. 3

                                                      I completely agree with you. Reaching into the application for a useful name would have been better. Frankly, it doesn’t make much sense to show the name of an invisible window to the user in any case; it’s invisible for a reason. Hopefully, the application has a useful name where it is going to look.

                                                      The fact that it has something.exe is definitely an improvement on what it was doing before (“G” is so useless even Microsoft’s engineers had no idea what it meant), and there are probably other places in the system where the name of an invisible window shows up, which would probably be useful. So the actual “GDI+” bug was probably still worth fixing.

                                                      1. 2

                                                        The special casing probably makes sense.

                                                         I disagree that showing nothing is better than showing a name that is cryptic to most. A specific name can be searched for; Windows hanging for no apparent reason whatsoever can have many causes.

                                                         The name is also easier for the human brain to remember, and then pattern-associate with a problem, than an error number.

                                                        1. 2

                                                          “Get a searchable name for the problem” is such a low bar. The bar should be set at not having to search. macOS has something roughly like this:

                                                          “Contoso app is preventing system from shutting down. [ Force quit the app ] [ Cancel shutdown ]”

                                                  1. 4

                                                     Nice article, since I like the theme that everyone makes mistakes.

                                                     Interestingly enough, the first mistake resonates most with me, in the sense that I have created over-complicated code for no good reason before.

                                                    The rest of the mistakes seem so very common and hard to avoid in general.

                                                    1. 1

                                                      The second one, with loop indices, can be “easily avoided” if you’re already using a language that doesn’t make you write easily-screwed-up loops all over the place. Java, for instance, now has for(Foo x: fooList) { ... } rather than having to do arithmetic and bounds checks. (Newer languages, in general, will do this.) Choice of tools matters!

                                                      1. 1

                                                         I agree. You can even do it with modern C++.

                                                        But I think that this is off topic. I don’t think the author was embarrassed that she couldn’t convince Google to use a saner programming language.

                                                        1. 1

                                                          No, I agree. It’s just worth bearing in mind that whenever we make a mistake that is “hard to avoid”, it can be valuable to back up and look at why it was hard to avoid. Sometimes we accept that tradeoff (“these are the tools we have, and lives aren’t at stake”), but it should be done with open eyes.

                                                    1. 2

                                                      TL;DR: They added a mark-and-sweep collector as an older-gen collector, and use the existing semi-space collector for young-gen.

                                                       Maybe I’m getting old and tired, but isn’t that pretty much working at the stone age of GC design? Just reading this makes me disappointed about what they did before. :-(

                                                      1. 1

                                                         Well, changing the GC in a programming language is quite a big deal, I think.

                                                        Didn’t golang also go for “stone age” mark & sweep with great success?

                                                        1. 0

                                                           Well, changing the GC in a programming language is quite a big deal, I think.

                                                          It depends, it gets easier the third or fourth time. :-)

                                                          Didn’t golang also go for “stone age” mark & sweep with great success?

                                                          Possible; achieving “stone age” is probably seen as a “great success” in a pre-stone age language like Go though. :-)

                                                        2. 1

                                                          If true, then yeah it’s not cutting-edge by any means. Just maybe an incremental improvement over what they had.