1. 2

Golang is exceptionally unportable, far more so than any other project I know (and I’m including ones like GCC here). Your regular everyday project doesn’t share those problems and works flawlessly on OSes and architectures that the authors have never heard of.

I strongly recommend that people don’t attach deeper meaning to whether a patch should be accepted and simply take it as it is. Is it a good patch? If yes, you don’t have to promise you will now forever support big-endian strict-alignment platforms; you can simply accept the current patch and be welcoming to the contributor.

      1. 7
        1. 7

          Seems there are some harsh words and threats exchanged: https://lists.zx2c4.com/pipermail/wireguard/2021-March/006499.html

          1. 6

            I don’t know what it is with Netgate but they seem embroiled in needless drama ridiculously often and seem to have a massive persecution complex. This entire thing seems right on par with what I’ve come to expect from them. Must be some fumes in the Netgate office building or something.

            1. 4

              Also note that the quotes are of a discussion that wasn’t posted to this mailing list.

          1. 5

            I don’t think I like this.

            pyca/cryptography users have been asking for support of Debian Buster (current stable), Alpine 3.12, and other platforms with older Rust compiler versions.

First, precompiled wheels should work fine (at least on Debian; I guess Alpine is a different story, since it uses musl), because installing a prebuilt binary doesn’t require a Rust toolchain. Secondly, if maintainers and users of some distributions want to use old or even ancient versions of software, power to them. But then you also get to carry the burden of maintaining fixes and compatibility against old versions.

I think the real issue is not that the Rust ecosystem is progressing, but that some distribution package systems are not well adapted to modern language ecosystems. Let’s not put Rust, Go, etc. in the same situation as C++, where we have to wait years to move forward because some people are running Ubuntu 14.04 or 16.04 LTS.

            1. 5

              Secondly, if maintainers and users of some distributions want to use old or even ancient versions of software, power to them. But then you also get to carry the burden of maintaining fixes and compatibility against old versions.

I think this needs a specific scale to be meaningful. Rust 1.41 is 1 year, 2 weeks old. This feels pretty new to me, especially considering that this is a foundational package (so it imposes an MSRV on all reverse dependencies), and that it targets a non-Rust ecosystem.

              1. 5

Note that Red Hat releases the Red Hat Developer Toolset so that stable distribution users can use the latest toolchain. It’s Ubuntu and Debian’s fault, not the fault of stable distributions in general.

                1. 4

It’s not just that “distro packaging systems are slower”. Packaging Rust can be a nightmare. Typically distros will have a wider set of supported platforms than Rust has in tier 1, so package maintainers are receiving the tier 2 and tier 3 experience of Rust, which is significantly worse.

Any release of Rust needs to be built with a previous version of Rust, and on a tier 2/3 system this means someone has to create a bootstrap binary for a wide set of platforms. It’s not uncommon for this process to run into issues, because tier 2/3 platforms are also not tested.

                  This process has to be repeated every 6 weeks per the release schedule, so you never catch a break from it.

Our current trouble with Rust is that our way of building Rust bootstraps is to build them for an older system (so they run on all the newer ones) and ship the bootstrap binary with the libraries it needs. Then Rust is built properly against the actual libraries for the system. This is starting to fall apart because Rust is really hard to build on these tier 2/3 platforms, up to and including “simply running out of address space on 32-bit platforms”. Someone will have to re-do the whole thing from scratch and ship a binary-only compiler, and somehow find an answer that doesn’t require us to rebuild a bootstrap binary every time the packages it depends on get updated.

                  Python adopting Rust in fundamental packages means that as long as these fires haven’t been put out, a huge chunk of packages will no longer work.

                  Realistically, we are probably going to package the pre-Rust versions and whenever Rust doesn’t work, switch to the old one. But this possibility has a limited shelf life.

                1. 6

I think most of the frustration with Wayland doesn’t come from Wayland itself per se, but from its authors and the software related to it. People (myself included) dislike the freedesktop/GNOME/systemd/Flatpak centralization of the Linux desktop propagated by Red Hat, and Wayland is an easy target because it’s “coming for your workflow!!”, so to speak. And to be fair, I also dislike the forcing of Wayland with new versions of Fedora (and I think Ubuntu as well, correct me if I’m wrong), because programs that people are used to no longer work. It’s frustrating when your workflow breaks because of things outside your control, and Wayland is the scapegoat in this situation.

                  That being said, GNOME people are hardly the easiest people to negotiate with (lol no thumbnails in file picker), and that only stokes the fire.

                  1. 4

If you don’t want your workflow to be broken, Fedora is the wrong distro for you. It’s very experimental and jumps on all the new hotness on principle.

                  1. 7

                    M1 … seems to be the first case of an ARMv8 SoC that removes the 32-bit execution unit from the CPU.

ThunderX and Qualcomm Centriq also dropped 32-bit compatibility.

                    1. 3

                      Cortex-A65(AE), Cortex-A34 too.

                    1. 1

                      I was (am?) on the Docker train, I did deploy two Docker Swarm clusters, but I never got around to Kubernetes. And at this point, I’m wondering (hoping?) whether I can just hold out until the next shiny thing comes along.

Docker is OK as a packaging format. I quite like the idea around layers. However, I can’t shake the feeling that as a runtime it’s a rather wasteful use of hardware. If you run a k8s cluster on Amazon, it’s virtualization upon virtualization (on top of whatever virtualization Amazon uses that we don’t see). This comes with a cost, both in managing the complexity and in hardware usage.

                      To top it off we have the hopelessly inefficient enterprise sector adding stuff like sidecar attachments for intrusion detection and deep packet inspection of these virtual networks.

                      I’m interested in trends that go the other way. Rust is cool, because with it comes a re-focus on efficient computing. Statically linked binaries would be a much simpler way of packaging than containers.
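To make that last point concrete, here’s a minimal sketch (the handler and module are made up) of the kind of thing Go lets you ship as a single file: with cgo disabled, the build below produces one self-contained binary with no shared-library dependencies, which you can copy to a bare server or a `scratch` image.

```go
// main.go - a tiny service that needs nothing but itself at runtime.
//
// Build a fully static binary with:
//   CGO_ENABLED=0 go build -o app .
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a single static binary")
	})
	// Listens on :8080; deployment is "copy the file and run it".
	http.ListenAndServe(":8080", nil)
}
```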

                      1. 1

k8s/docker/etc. don’t need to be virtualized; that is one of their selling points. Dunno if that’s how AWS does it, though.

                      1. 14

As someone who paid a fair bit of attention to the early Docker world, and now, seeing its commodification, am left wondering “what was it”, I think this article does a good job of explaining it. What it doesn’t explain is… I was around in that early Red Hat time, when it was small, when you could shake Bob Young’s hand at a Linux meetup. Heck, I remember when Google was a stanford.edu site… the question in my mind is… why did Red Hat and Google succeed (as corporate entities) and Docker not so much? Perhaps it was the locking in of the company name and the core tech? Perhaps the world of 2010-2020 was far more harsh to smaller businesses; perhaps they just overshot by trying to fight their competitors instead of partnering with them. That will probably have to wait for an HBR retrospective, but I’m not 100% psyched that the big incumbents won this.

                        1. 13

                          Docker lost, as I understand it, because of commoditisation. There’s a bunch of goo in Linux to try to emulate FreeBSD jails / Solaris Zones and Docker provided some tooling for configuring this (now fully subsumed by containerd / runc), for building tarballs (not really something that needs a big software stack), and for describing how different tarballs should be extracted and combined using overlay filesystems (useful, but should not be a large amount of code and now largely replaced by the OCI format and containerd). Their two valuable things were:

                          • A proprietary build of a project that they released as open source that provided tooling for building container images.
                          • A repository of published container images.

The first of these is not actually more valuable than the open source version, is now quite crufty, and so has a load of competitors. The second is something that they tried to monetise, leaving them open to competitors who get their money from other things. Any cloud provider has an incentive to provide cheap or free container registries, because a load of the people deploying the containers will be spending money to buy cloud resources to run them. Docker didn’t have any equivalent. Running a container registry is now a commodity offering, and Docker doesn’t have anything valuable to couple their specific registry to that would make it more attractive.

                          1. 9

                            I wrote a bit about that here – Docker also failed to compete with Heroku, under its former name dotCloud.

                            https://news.ycombinator.com/item?id=25330023

                            I don’t think the comparison to Google makes much sense. I mean Google has a totally different business that prints loads of money. If Docker were a subdivision of Google, it could lose money for 20 years and nobody would notice.

As for Red Hat, this article has some interesting experiences:

                            Why There Will Never Be Another RedHat: The Economics Of Open Source

                            https://techcrunch.com/2014/02/13/please-dont-tell-me-you-want-to-be-the-next-red-hat/

                            To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important, that we had out-engineered ourselves.

                            (Although I don’t think Docker did much engineering. It wasn’t that capable a product. It could have been 30 to 100 people at Google implementing it, etc. Previous thread: https://lobste.rs/s/kj6vtn/it_s_time_say_goodbye_docker)

                            1. 3

                              I appreciate the article on RedHat. It has certainly opened my eyes to the troubles with their business model, which I had admired in the past. (I suppose it is still admirable, but now at least I know why there aren’t more companies like it.)

The back half of the article is strange, though. I’m not sure what I’m supposed to learn about building a new business based around open source by looking at Microsoft, Amazon, or Facebook. While they all contribute open source code now, they did not build their businesses by selling proprietary wrappers around open source products, as far as I know. And given the sheer size of those companies, it seems very hard to tell how feasible it would be to copy that behavior on a small scale. GitHub seems like a reasonable example of a company monetizing open source, however. It is at least clear that their primary business relies on maintaining git tools. I just wish the article included a few more examples of companies to look up to. Perhaps some lobsters have ideas.

                              1. 5

                                I just wish the article included a few more examples of companies to look up to

                                To a first approximation, there are no companies to look up to.

                                1. 2

                                  I feel like some of the companies acquired by RedHat might be valid examples. I expect that the ones that are still recognizable as products being sold had a working model, but I don’t know what their earnings were like.

                                2. 3

The biggest ones I can think of, not mentioned, are Mongo and Elastic… Redis may go public soon; there are lots of corps around data storage and indexing that, to some extent, keep their core product free. There might be more. If you look at interesting failures, going back to the early days, LinuxCare was a large service-oriented company that had a giant flop, as did VA Linux (over a longer time scale):

                                  linuxcare https://www.wsj.com/articles/SB955151887677940572

                                  va linux https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux

                                  1. 2

                                    Appreciate it, thanks.

                              2. 8

The same question, I think, could be asked of why Netflix succeeded but Blockbuster failed; both were doing a very similar thing. It seems that market success consists of chains/graphs of very small incremental decisions. The closer decisions are to the company’s ‘pivot time’, the more impactful they seem to be.

And, at least in my observation, paying well and listening to well-rounded, experienced, risk-taking folks who join your endeavor early pays huge dividends later on.

In my subjective view, Docker failed to visualize and execute on the overall ecosystem around their core technology. Folks who seem to have that vision (but perhaps not always the core technology) are the ones at HashiCorp. They are not Red Hat by any means, but every one of their OSS + freemium products seems to have a good, cohesive, and ‘efficient’ vision of the ecosystem in this space (where by ‘efficient’ I mean that they do not make too many expensive and user-base-jarring missteps).

                                1. 1

could be asked of why Netflix succeeded but Blockbuster failed; both were doing a very similar thing

                                  I’m not sure I agree. Coincidentally, there’s a YT channel that I follow that did a decent overview on both of them:

                                2. 3

My opinion on this is that both Google and Red Hat are much closer to the cloud and the target market than Docker is/was.

                                  Also, I thought that Docker was continuously trying to figure out how to make a net income. They had Docker Enterprise before it was sold off, but imo I’m not sure how they were aiming to bring in income. And a startup without income is destined to eventually close up.

                                  1. 3

the question in my mind is… why did Red Hat and Google succeed (as corporate entities) and Docker not so much?

                                    Curating a Linux distribution and keeping the security patches flowing seamlessly is hard work, which made Red Hat valuable. Indexing the entire Internet is also clearly a lot of hard work.

                                    By comparison, what Docker is doing as a runtime environment is just not that difficult to replace.

                                    1. 1

                                      I kinda feel like this is the ding ding ding answer… when your project attempts to replicate a project going on inside of a BigCo, you will have a hard time preventing embrace and extend. Or perhaps, if you are doing that, keep your company small, w/ limited debt, because you may find a niche in the future, but you can’t beat the big teams at the enterprise game, let alone a federation of them.

                                    2. 2

I think we all know our true desires; we are just left to discover them.

Let’s not forget the Docker timeline:

• Started in 2013.
• Got open-source recognition.
• Got increased public use in 2015/2016.
• In 2017, the project was renamed from Docker to Moby. Mistake 1.
• In 2018, Docker Hub started requiring user registration. Mistake 2.
• In 2019, the Docker Hub database was hacked, exposing user data. Mistake 3.
• In 2020, Docker finally died and awaits its rebirth. Goodbye.

When I think about it, I’m not even mad. Hail the death of Docker.

                                    1. 6

I’m curious: how does dockershim being removed from K8s lead to the conclusion that Docker Inc. as a company is dying? As explained by many, the Kubernetes team took that step to remove the bloat that Docker created in the codebase. But do you think people will stop using the docker CLI altogether and go back to writing 10 lines of bash script to spin up a new container, network, etc.? docker run is a UX layer on top of those containerd commands, and I don’t see why people would stop using it just because K8s decided to remove the “dockershim” module. And how any of this affects Docker Inc. is what I’m still unable to understand; AFAIK the docker CLI is open source and obviously doesn’t generate any revenue for Docker Inc. (which is what matters when we are talking about a company!)
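For a sense of what that UX layer wraps, here’s a rough sketch of driving containerd directly through its Go client, loosely following containerd’s getting-started example. The socket path and namespace are just the usual defaults and error handling is minimal; treat it as an illustration, not a reference.

```go
// Roughly what `docker run alpine` corresponds to when talking to
// containerd directly, step by step.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes everything to a namespace (docker uses "moby").
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Step 1: pull and unpack the image.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: create a container (metadata + snapshot + OCI runtime spec).
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Step 3: create and start a task (the actual running process).
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	// ...and none of this has set up networking yet; docker run wires that up too.
}
```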

                                      1. 5

I think the reason it points in that direction is that there are multiple k8s providers and installers that default to the docker runtime (digitalocean managed k8s uses docker as the runtime, and kubespray defaults to it as well but also supports CRI-O). With dockershim going away where else will docker be used other than developers’ desktops?

Personally, I bit the bullet and was basically forced to switch to podman/buildah due to Docker straight up not supporting Fedora 32+ after the kernel change to cgroups v2. Docker Desktop for Mac/Windows is a nice product for running containers on those OSes, but my guess is that’s the only place it will stay relevant. It’s easy enough to have a docker-compatible CLI aliased to docker that doesn’t require the daemon on Linux, etc.

                                        Also, with their attempts at monetizing DockerHub it kind of paints a “failing” aura over the company. If they can’t make money off of DockerHub how can they monetize a daemon that runs containers when there are many other equivalent solutions?

                                        1. 1

                                          multiple k8s providers and installers that default to the docker runtime (digitalocean managed k8s uses docker as the runtime, and kubespray defaults to it as well but also supports CRI-O). With dockershim going away where else will docker be used other than developers’ desktops?

Similarly, microk8s and k3s have been using containerd since forever.

                                          With dockershim going away where else will docker be used other than developers’ desktops?

Yep, exactly. It will be used by end developers just the way it is right now. I understand there are more lightweight alternatives for building images (especially ones that don’t require you to run a local daemon) that are more appealing. But not everyone runs K8s, and I think there’s a large market out there for people running standard installations of their software just with docker/docker-compose :)

                                          1. 2

                                            I think there’s a large market out there for people running standard installations of their software just with docker/docker-compose

                                            This is extremely true. I have many friends/colleagues who use docker-compose every day and there is no replacement for it yet without running some monster of a container orchestration system (compared to compose at least).

I guess my main worry is that Docker is a company based on products they are having an extremely hard time monetizing (especially after they spun off their Docker EE division). I don’t see much of a future for Docker (the company), even if loads of developers use it on their desktops.

                                            1. 2

Docker Compose was based on an acqui-hire of the folks who made fig.sh, and then very little ever happened feature-wise. It’s a super useful tool, and if they’d been able to make it seamless with deployment (which is very hard, it seems) the story might’ve been different.

                                            1. 1

                                              Yep, I appreciate that they finally made it available for Fedora 32 (after having to tweak kernel args), but many of us already switched to alternatives.

They still don’t ship repos for Fedora 33 (the current release). After checking the GitHub issue related to supporting Fedora 33, though, it appears the repo is now live, even though it only contains containerd.

                                        1. 3

This is very hypocritical, considering Google’s own browser is one of the worst offenders when it comes to User-Agent impersonation. This is a User-Agent you might see from it: “Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36”.

                                          This might have additional unintended effects on mail clients as they may open an embedded browser for the OAuth2 dialog.
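On the User-Agent point above, it’s worth remembering the header is purely self-reported; nothing stops any client from sending whatever string it likes. A minimal Go sketch (example.com is just a placeholder) that sends exactly that Chromium string:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The User-Agent is just a header the client chooses; this request
	// "claims" to be Mozilla, KHTML, Gecko, Chrome, and Safari at once,
	// exactly like Chromium's default string does.
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("User-Agent",
		"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "+
			"Ubuntu Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```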

                                          1. 8

It would be nice if people remembered that you don’t have to leave scorched earth in your wake as you leave a project. It’s no surprise to me that Xorg is begging for working hands when its active maintainers seem to repeatedly write blog posts telling people it’s deprecated.

                                            1. 6

                                              The main selling point of lobsters to me is the authored posts and interaction in the comments. I’d rather not see a blanket ban just because there’s some abuse of it.

                                              1. 1

                                                (Let’s read in to this way too much!)

                                                You don’t need to be that person to have a fun GitHub contribution graph.

The point is a good one: GitHub contribution graphs can be overrated. However, contribution graphs do show a developer’s consistency. That’s important for any skill, and at any proficiency.

                                                1. 4

                                                  That sounds like a really unconvincing metric. How do you tell if they did work that isn’t counted in GitHub’s metrics?

                                                  1. 2

                                                    I’m not very consistent so I’m glad I can employ software to convince others that I am.

                                                    Related: https://twitter.com/catcarbn/status/1306244325995495426?s=20

                                                  1. 5

                                                    It’s important to keep in perspective that the backwards incompatibility issue with non-module builds that was patched into older releases was done so all the way back into the Go 1.9 release branch. Go 1.9 was originally released in August of 2017. I don’t know of anyone that is still using a Go release from back then, which is mostly a product of how well the Go team prioritizes not introducing breaking changes. The Go team also only provides security patches for the latest two version branches of Go (currently, 1.14 and 1.15), so nobody should be using a release that old regardless.

                                                    1. 2

It was painful from the developer perspective because your dependencies and deps-of-deps might not have had module support, and might already have been at v2/v3, etc. (thus blocking you from sanely supporting both non-module and module builds during the transition period).

                                                      1. 4

There was never a need for your dependencies, or their dependencies, to be or understand modules in order to consume them in a module build. They just worked. The only module “feature” that wouldn’t have been possible with this code is the ability to include multiple major versions of a package in one build, since that does rely on semantic import versioning (which is what this author is challenging).
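For anyone unfamiliar with semantic import versioning: the major version becomes part of the import path from v2 onward, which is what lets two major versions coexist in one build. A hypothetical sketch (the module path, package, and Version function are all made up for illustration):

```go
// go.mod (for illustration):
//   module example.com/app
//
//   require (
//       example.com/lib v1.5.0
//       example.com/lib/v2 v2.1.0
//   )

package main

import (
	"fmt"

	lib "example.com/lib"      // major version 1: bare import path
	libv2 "example.com/lib/v2" // major version 2+: /vN suffix in the path
)

func main() {
	// Both major versions are distinct packages and can be used side by side.
	fmt.Println(lib.Version(), libv2.Version())
}
```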

                                                    1. 17

Oh wow, and just when I thought the Go dependency mess couldn’t possibly have gotten worse than ca. 2016…

                                                      1. 11

What on earth are they up to? I think this started with GOPATH, a bizarre fixed path that other languages are quite alright without. It’s just descended into an increasing mess since. There are some people with half a century plus of experience, yet this seeming aimlessness: what are they trying to achieve?

                                                        1. 3

What do you mean by a fixed path? Don’t all languages need a location for their libraries?

                                                          1. 6

                                                            Source code had to live in a particular path too. It shouldn’t be required now, but I recently had a cryptic code generation issue magically solved when I moved my code to ~/go/src/github.com/name/name.

                                                      1. 4

                                                        Damn that’s a low bounty. That’s pretty much the holy grail of vulnerabilities.

                                                        1. 6

                                                          Great! Looking forward to it!

                                                          optimizations for SSDs,

Always welcome. I didn’t do any research into what this means exactly; let’s hope it is not just marketing :)

                                                          The switch to Btrfs will use a single-partition disk layout, and Btrfs’ built-in volume management. The previous default layout placed constraints on disk usage that can be a difficult adjustment for novice users. Btrfs solves this problem by avoiding it.

                                                          Neat. The default layout before (in Fedora) made / way too big, wasting a lot of space that could have been used in ${HOME}… especially on (smallish) SSDs.

                                                          1. 5

optimizations for SSDs,

Always welcome. I didn’t do any research into what this means exactly; let’s hope it is not just marketing :)

Btrfs supports compression, as the article says. Enabling compression is expected to improve the SSD’s life span because you have less data to write (the less you write, the less the disk wears out). Of course, it may depend highly on your workload.

                                                            1. 1

For SSDs, are they completely random access? Or is there some benefit to storing related blocks close together, à la a spinning disk, for sequential reads?

                                                              1. 3

                                                                There’s some setup cost to select the block in which the sector resides, so accessing adjacent sectors is still faster than truly random accesses.

                                                                1. 2

                                                                  If you look at SSD benchmarks, they do in fact give better throughput in sequential reads vs. random.
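Roughly the kind of comparison those benchmarks make, sketched in Go for illustration (the file path is a placeholder, and a real benchmark such as fio bypasses the OS page cache, which this naive version does not):

```go
// Read the same file once sequentially and once at random 4 KiB offsets,
// and compare wall-clock time. Purely illustrative.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	const block = 4096
	f, err := os.Open("testfile.bin") // placeholder: any large file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, _ := f.Stat()
	blocks := info.Size() / block
	buf := make([]byte, block)

	start := time.Now()
	for i := int64(0); i < blocks; i++ { // sequential pass
		f.ReadAt(buf, i*block)
	}
	fmt.Println("sequential:", time.Since(start))

	start = time.Now()
	for i := int64(0); i < blocks; i++ { // random pass
		f.ReadAt(buf, rand.Int63n(blocks)*block)
	}
	fmt.Println("random:", time.Since(start))
}
```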

                                                              2. 3

                                                                I must be an outlier in finding their / very small. I use docker and had to move where it puts its images because Fedora made my / too tiny.

                                                                1. 3

                                                                  I’m using VMs in GNOME Boxes and they are stored in ${HOME}/.local/share/gnome-boxes so the exact opposite :-D In either case it should be solved with btrfs…

                                                              1. 4

                                                                Can’t just change function names willy-nilly, that’d ruin the distribution of function names by length

                                                                1. 5

                                                                  Back when architectures were designed for human assembly programmers and not just as compiler targets. Having a simple and elegant instruction set was considered a selling point.

                                                                  If you’re interested in learning an assembly language, you’d be hard-pressed to find a better one than m68k.

                                                                  1. 5

                                                                    Back when architectures were designed for human assembly programmers and not just as compiler targets. Having a simple and elegant instruction set was considered a selling point

Turns out it also makes it hard to build a fast processor. Mashey believed it wasn’t the number of instructions, but the ergonomic and symmetrical forms that led to things like memory-to-memory instructions, which make it harder to optimize (memory decode, dependencies, etc., when it gets broken down into µops…).

Ironically, the ugly duckling of CISC, x86, is ugly in ways that mostly don’t matter for performance. IBM System/3x0 is actually pretty clean, and arguably on the borderline of RISC, with clean, mostly fixed instruction forms and mostly eschewing memory-to-memory instructions. (Arguably, the Model 44 comes pretty close to RISC!) I don’t think it’s a coincidence that x86 and z are around today while the more aggressively assembly-friendly architectures like VAX and 68k died.

                                                                    1. 3

                                                                      It isn’t at all a coincidence; x86 and the Z series are much easier to implement than the 68k or the VAX.
                                                                      John Mashey discusses the difficulties with implementing a high-speed VAX here.

                                                                    2. 2

                                                                      If you’re interested in learning an assembly language, you’d be hard-pressed to find a better one than m68k.

                                                                      Someone who wrote several assemblers thinks MSP430, MIPS, and AVR8 are the cleanest architectures:

                                                                      https://github.com/mikeakohn/naken_asm/issues/60#issuecomment-471514168

                                                                      1. 2

I think that refers to parsing by a machine; indeed, MIPS was designed to be very easy to parse (as were other RISC ISAs), but not for ease of writing the instructions by a human.

                                                                        I’ve found MIPS to be somewhat obnoxious to write, but I realize my experiences refer to privileged code intended to work on multiple machines, so aren’t the typical MIPS experience.

                                                                        1. 2

Having worked on a MIPS implementation and the LLVM MIPS back end, I’d agree that MIPS is clean from the perspective of writing an assembler or instruction decoder, as long as we’re talking about MIPS IV and not the newer MIPS32 and MIPS64. That is, however, the only positive thing that I could think of to say about the ISA.

                                                                          1. 2

                                                                            While I still love the m68k, See MIPS Run is the best processor architecture book I’ve ever read…

                                                                          2. 1

                                                                            I think AVR is a good, modern-day contender that I would recommend to anyone looking to get started with Assembly.

                                                                            1. 1

                                                                              risc-v?

                                                                              1. 5

                                                                                risc-v?

                                                                                If you want to get an understanding of a simple close-to-the-metal environment, RISC-V is fine. If you want to write assembly code, it’s painful. The lack of complex addressing modes means that you end up burning registers and doing arithmetic for simple tasks. If you want to do complex things like bitfield manipulation, you either need to write a lot of logic with shifts and masks or you need to use an extension (I think the bitmanip extension is standardised now, but the cores from ETH have their own variants). There are lots of clunky things in RISC-V.

                                                                                ARM (AArch32 or AArch64) is much nicer to use as an assembly programmer. Both are big instruction sets, but the addressing modes on ARM are really nice to work with (it’s almost as if they, unlike the RISC-V project, followed the RISC I methodology of examining the output from compilers and working out what the common sequences of operations were, before designing an instruction set).

                                                                                Note that ARM doesn’t call itself a RISC ISA anymore, it calls itself a load-store architecture. This is one of the key points of RISC (memory-register and memory-memory instructions make out-of-order execution difficult), but they’re definitely not a small ISA. They do have a much more efficient encoding than RISC-V (which, in a massive case of premature optimisation, optimised the ISA to be simple to decode in an in-order pipeline).
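To illustrate the bitfield point: without a bit-extract instruction, you (or the compiler) spell the operation out as shifts and masks. A rough sketch of that operation, written in Go rather than assembly for brevity:

```go
package main

import "fmt"

// extractBits returns the width-bit field of x starting at bit position lsb.
// On an ISA with a bitfield-extract instruction this is a single operation;
// on base RISC-V it ends up as the same shift-and-mask dance written here.
func extractBits(x uint32, lsb, width uint) uint32 {
	mask := uint32(1)<<width - 1
	return (x >> lsb) & mask
}

func main() {
	// Pull bits 12..19 (an 8-bit field) out of a word.
	fmt.Printf("%#x\n", extractBits(0xDEADBEEF, 12, 8)) // prints 0xdb
}
```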

                                                                                1. 2

Ah, I made the mistake of assuming that RISC-V’s smaller instruction set meant it was easier to work with.

                                                                          1. 5

This always feels fake to me. That is because I am not from an Anglo-Saxon background, and in my culture this would be considered fake friendliness. It reminds me of this weird pattern where, if you ask somebody something, they always start with “great question!”. I feel like I am being treated like a kindergartner or something. Get to the point already; I have other things to do.

I think that if you want to thrive in tech you have to be able to endure a certain level of pain. Computers are a gigantic mess. There are rubber bands and duct tape everywhere, and things are broken all the time. I am not feeling sorry for you because something is not working for you. That is normal, and you should accept that it is normal. It is part of this profession that things are messy, and if you cannot handle a neutral answer to a seemingly normal problem, you are going to have a bad time.

                                                                            You will always get my empathy if you have personal issues that you are dealing with, but broken software is part of what we do.

                                                                            1. 3

                                                                              I agree on the cultural background. For my own background, expressing emotions and concern I don’t have seems like deception. It’s often mentioned as a weird point when communicating with people from the US. I’m happy to use fuller sentences though.

                                                                              1. 2

                                                                                I’m not entirely sure what you mean by “anglo-saxon” background. Insofar as that term refers to the shared culture of white English-speaking people across the several major Anglophone countries, I am from that background and I completely agree that it feels like fake friendliness, and has the air of a schoolteacher addressing a small child. That said, I think this is a kind of fake friendliness that is (unfortunately) common in the white anglosphere, particularly among women, and I think people from other cultural backgrounds might be less likely to insist that this kind of cloying language is necessary to show empathy.

                                                                              1. 7

When I paint a mental image of these big-tech interviews, I imagine a monkey jumping through hoops.

It still has to be proven that implementing algorithms quickly and explaining how a 3-way handshake works is meaningfully correlated with the position at hand. I’m sad to see that the computer-science interview process has more and more adapted to this mode rather than checking whether a candidate, as a person, brings in the right philosophy.

Indeed, there needs to be some qualification at hand, and it would be possible to check this in a small subsection of an interview; however, making it almost the sole aspect is worrying. When these people rise higher in the hierarchy of these companies, other skills are more relevant (soft skills, emotional intelligence, understanding office politics).

When these big tech companies don’t make selections based on that, we shouldn’t be surprised when we end up with managers who lack said skills, are unable to make good business decisions, and might even be cold-hearted sociopaths.

                                                                                1. 14

                                                                                  I haven’t seen it, but I know (several of my colleagues were there when it happened) that they did an internal study at a former workplace, some time before I’d joined, as part of a wider effort to (potentially) revamp the interviewing process after a large reorg. The findings were pretty much unsurprising. It turned out, first of all, that there wasn’t much correlation between the performance in the algorithm-heavy test and post-hiring activity. Worse, though, it turned out that performance in the interview-heavy test wasn’t a good predictor for the hire/no hire feedback, either. People who did very poorly usually got a no hire, but once you got past the “doesn’t know what a linked list is” level, lots of people did great, or at least okay, and got a no hire feedback, and lots of people did poorly but got a “hire” feedback.

                                                                                  Eventually, the whole mechanism remained in place (!!), for two reasons.

First, no one had an acceptable suggestion for how to go about evaluating fresh graduates (for various reasons, tests that you could take home with you weren’t considered a good idea).

Second, while virtually all programmers agreed that the tests were useless, virtually all hiring managers wanted them to stay. Realistically, if you cut through the standard corporate doublespeak, they wanted them to stay for two reasons. The most important one was that the test and the hire/no hire feedback gave them a rock-solid paper trail for every decision; no matter what happened, they provided the perfect cover. If job performance was terrible, then:

                                                                                  • Terrible score, good feedback? I trust my team to make the right decision, mistakes come with the territory of that, and numbers never paint the full picture of a person anyway.
                                                                                  • Good score, bad feedback? They did good on the interview, we had our doubts but we have to stay metrics-driven in our decisions.
                                                                                  • Good score, good feedback? They looked great in the interview.

                                                                                  Bad score and bad feedback obviously didn’t get you hired, and hiring people who did great on the job was obviously considered a success so nobody bothered to examine how that happened.

                                                                                  The other reason, which I have heard on more than one occasion (and not just there), is that, I quote, “people know that you have to learn these things if you want to get your foot in the door in our industry and we want to hire people who are willing to do that kind of work”.

                                                                                  1. 2

Even within Google there’s been recognition (over the past few years) that these whiteboard algorithm interviews are not very predictive of future job performance. We’ve been experimenting with a few alternative approaches, including (for new grads only) an evidence-based path to hiring: even if you don’t seem to be good at algorithms-on-a-whiteboard during interviews (but can at least write decent code on a laptop and display evidence of being able to learn new concepts during an interview), you can get a 1-year contract to actually work on up to 2 different teams. After that it’s much easier to base a hiring decision on your actual work.

                                                                                    1. 1

                                                                                      Did we work at the same company?

                                                                                      1. 2

                                                                                        I don’t know, but the stuff above describes almost every large company I’ve seen – so I guess in a way we did :-).

                                                                                    2. 8

                                                                                      Apple’s the only FAANG I’ve never been in the pipeline for, so I can’t comment on them. Facebook and Google both seem to be exactly what the stereotypes say: endless laborious algorithm-challenge stuff.

                                                                                      Netflix, though, I don’t know if it was just the specific team or not, but their process was really quick. I think something like ten days total from first phone call to onsite. And all the technical sessions involved realistic things the team would actually be doing, and seemed to evaluate on things that would actually matter on-the-job.

                                                                                      1. 3

                                                                                        Netflix generally does things differently to the big tech companies I believe. It doesn’t surprise me to hear that their hiring process is well thought out too.

                                                                                        1. 1

                                                                                          I feel like Netflix shouldn’t be compared to the other 4 companies in the list. It’s a lot smaller than them. Thinking of what the acronym would be without the N does explain why people feel the need to include it though.

                                                                                        2. 7

                                                                                          When I used to interview engineers for FB, the most obvious thing I could tell was when someone had found our interview questions and rehearsed the answers. Beyond that, they weren’t that useful.

                                                                                          1. 1

                                                                                            How did you judge people who you suspected had rehearsed? Did you see them as cheaters, capable, etc.?

                                                                                            1. 4

                                                                                              Where they had clearly regurgitated a memorised answer to the interview question, I noted that I didn’t have any information on their ability to solve novel programming problems.

                                                                                          2. 4

I feel the same way, even though I think that for an SRE the question about the TCP three-way handshake is relevant.

However, I also think that, for the goals of the process and the company, the interview process of a big tech company would not benefit from philosophy at all. “Monkeys jumping through hoops” fits a lot better. Unless you created some widely used programming language or kernel, or are otherwise very distinguished in the first place, in big companies you are by nature a tiny gear, and as such the interview process tries to find out whether you can be a tiny gear.

Also, when you are a big company (as in: many employees), what’s important is to have people who can quickly come, go, and be replaced if needed, simply because that happens more often. While philosophical alignment probably makes sense inside a team in order to work efficiently, especially in smaller teams where you get to know all the people (which I guess is why the “eating together” happened), I don’t think it is really relevant to have this on a company-wide level.

At the end of the day, as a regular SRE at a big company you want to know TCP and BGP in and out to troubleshoot problems that occur. I assume the parameters would be something like “doesn’t require long onboarding”, “has experience with something that looks like what we have”, “will try their best to do what this position requires”, so that overall the hire is cost-effective.

I also assume that consistent, quality work is more important for that position than, for example, having people excel because they also fit perfectly on a personal level. At least to me, the generic introduction by the recruiters sounds like a “one of many SREs” position, with part of the pay potentially being able to list Google on the CV.

What I want to say is that this process might work very well for Google because of its size, company structure, form, and goals. And given that you sound like you would not want to go through such a process, you might not be part of their target audience.

Maybe it’s possible to compare this with how developing and using Kubernetes or a programming language like Go (just random examples) is not the right decision for everyone, not even for somewhat similar companies, when it might be for Google.