Threads for coypoop

  1. 2

    We were promised 1/10th of the $200 million, or $20 million in stock, on completion: $10 million to me, $5 million to Ed, and $5 million to Karen Crippes.

    This is very far removed from the compensation I am used to seeing. Is this normal and I’m just oblivious?

    1. 7

      A separate issue, but apparently they didn’t receive those stocks. (Very much a tangent, but maybe of interest.)

      1. 5

        I was at Apple at the time, in the macOS division, and this was very far from normal, at least in my experience.

        I think in this case it has something to do with the very large expense of not doing this, the mind numbing grunge of the actual work, and the very small number of people actually qualified to do it. Kind of a perfect storm.

        1. 1

          It looks quite high, but it’s not unusual for companies to give occasional one-off bonuses, possibly a multiple of normal compensation, to employees who were instrumental in shipping products that brought in large amounts of revenue. From the article, it sounds as if the author was someone right at the top of the engineering track. Levels.fyi doesn’t have data from people that high up at any of the big tech companies except Google (where it has a single data point for an E9 engineer making around $4.5M/year). Two levels below that at Apple they have someone making over $1M/year.

          I’m not sure exactly how you can extrapolate across the industry from this. At Microsoft, the base salary scales roughly linearly with level but the stock and bonus amounts scale with some polynomial factor (on the assumption that the more influence you have over the overall success of the company, the more your total compensation should reflect this).
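
          To make that linear-vs-polynomial contrast concrete, here’s a toy model with entirely invented numbers (the coefficients and the cubic exponent are made up for illustration, not anyone’s real pay bands):

          ```shell
          # Hypothetical pay model: base grows linearly with level,
          # stock/bonus grows roughly cubically. All numbers are invented.
          for level in 3 5 7 9; do
            base=$((40000 * level))
            stock=$((2000 * level * level * level))
            echo "L$level: base=$base stock=$stock total=$((base + stock))"
          done
          ```

          Base pay merely triples from level 3 to level 9 here, while the stock component grows 27-fold — the shape the comment describes, where total comp for the most senior people is dominated by stock.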

          Even accounting for inflation, this looks like it’s a large but not unbelievable bonus amount for one of the most senior engineers for completion of a project that saved the company a much larger amount. That said, it sounds as if the team were never actually paid this bonus, so who knows? I guess the lesson is that if you’re promised a large bonus in advance, get it in writing.

          1. 1

            Answered in the very next paragraph:

            I got the $10 million, because it was going to be my job on the line, and potentially, my ability to work in industry at a high level, ever again, in the future.

          1. 6

            Not from programming but a younger me communicated so much using a computer that instead of a voice in my head, I’d imagine myself typing my thoughts out on a keyboard.

            1. 2

              I still do this.

            1. 16

              There’s a lot of good stuff in here that we all think everyone knows and we say to each other in the pub but we don’t really say out loud to the people that need to hear it.

              The main one that comes to mind is about mobility. They said something like “if I get fired I’ll have a new job in two weeks.” The tech folks that don’t know this is true need to learn it. More importantly: the people who manage tech people need to learn it.

              1. 22

                if I get fired I’ll have a new job in two weeks.

                This has never been true for me. Job hunting has always been a relentless slog.

                1. 12

                  Imma guess it depends on where you are. In Silicon Valley, Seattle, NYC, or London, you can basically put your desk stuff in a box, throw it out a window, and have it land in another tech company’s lobby.

                  Other places, not so much.

                  1. 9

                    I agree living in a tech hub makes finding a job way easier, but I’ll jump in to temper the hyperbole just a bit. I know that I personally felt a lot of self-hatred when I tried to change jobs and it took months of applications and references and interviews to actually get one, even living in a tech hub.

                    1. 6

                      Technology stacks don’t really matter because there are like 15 basic patterns of software engineering in my field that apply. I work in data so it’s not going to be the same as webdev or embedded.

                      It depends on what you do. The author is a database specialist, so of course they’re going to claim that SQL is the ultimate language and that jobs are plentiful. I’m an SRE, so my career path requires me to pick specific backend-ready languages to learn. I have several great memories of failed interviews because I didn’t have precisely the right tools under my belt:

                      • I worked on a Free Software library in Python along with other folks. They invited me to interview at their employer. Their employer offered me a position writing Lua for production backends. To this day, I still think that this was a bait-and-switch.
                      • I interviewed at a local startup that was personally significant in my life. I had known that it wasn’t a good fit. Their champion had just quit and left behind a frontend written with the trendiest JS libraries, locking their main product into a rigid unmaintainable monolith. I didn’t know the exact combination of five libraries that they had used.
                      • I interviewed at a multinational group for a position handling Kubernetes. I gathered that they had their own in-house monitoring instead of Prometheus, in-house authentication, etc. They also had a clothing line, and I’m still not sure whether I was turned down because I didn’t already know their in-house tools or because I wasn’t wearing their clothes.
                      1. 3

                        They also had a clothing line, and I’m still not sure whether I was turned down because I didn’t already know their in-house tools or because I wasn’t wearing their clothes.

                        Seems like a blessing in disguise if it was the clothes.

                      2. 3

                        I have this problem and I’m in a tech hub. Most of my coworkers and technical friends are in different countries I can’t legally work in, so I rarely get interviews through networking. Interviewing is also not smooth sailing afterwards.

                      3. 5

                        This has never been true for me. Job hunting has always been a relentless slog.

                        Same here, I also live in a city with many startups, but companies I actually want to work for, which do things I think are worthwhile, are very rare.

                      4. 7

                        There’s a lot of good stuff in here that we all think everyone knows and we say to each other in the pub but we don’t really say out loud to the people that need to hear it.

                        Interesting that you say that in the context of modern IT. It has been so with many things since ancient times.

                        https://en.wikipedia.org/wiki/In_vino_veritas

                        Perhaps the traditional after-work Friday beer plays a more important role in one’s career than most people think. Wisdom is valuable and not available on a course you can sign up to.

                        1. 1

                          Wisdom is valuable and not available on a course you can sign up to.

                          Which is ironic given wisdom is often what they’re being sold as providing.

                        2. 5

                          The main one that comes to mind is about mobility. They said something like “if I get fired I’ll have a new job in two weeks.” The tech folks that don’t know this is true need to learn it. More importantly: the people who manage tech people need to learn it.

                          Retention is a big problem. It can take up to a year to ramp even a senior person up to full productivity on a complicated legacy code base. Take care of your employees: make sure they are paid a fair wage and are not stuck in the pressure cooker of bad management that thinks yelling fixes problems.

                          1. 2

                            That’s probably why the OP says their salary went up 50% while their responsibilities were reduced by 50%. Onboarding.

                        1. 2

                          Golang is exceptionally unportable, far more so than any other project I know (and I’m including ones like GCC here). Your regular everyday project doesn’t share those problems and works flawlessly on OSes and architectures that the authors have never heard of.

                          I strongly recommend that people don’t read extra meaning into whether a patch is accepted, and just take it as it is. Is it a good patch? If yes, you don’t have to promise you will now forever support big-endian strict-alignment targets; you can simply accept the current patch and be welcoming to the contributor.

                            1. 7

                              Seems some harsh words and threats were exchanged: https://lists.zx2c4.com/pipermail/wireguard/2021-March/006499.html

                              1. 6

                                I don’t know what it is with Netgate but they seem embroiled in needless drama ridiculously often and seem to have a massive persecution complex. This entire thing seems right on par with what I’ve come to expect from them. Must be some fumes in the Netgate office building or something.

                                1. 4

                                  Also note that the quotes are of a discussion that wasn’t posted to this mailing list.

                                2. 7
                                1. 5

                                  I don’t think I like this.

                                  pyca/cryptography users have been asking for support of Debian Buster (current stable), Alpine 3.12, and other platforms with older Rust compiler versions.

                                  First, precompiled wheels should work fine (at least on Debian; I guess Alpine is a different story, since it uses musl), since the binaries have no Rust build dependency. Secondly, if maintainers and users of some distributions want to use old or even ancient versions of software, power to them. But then you also get to carry the burden of maintaining fixes and compatibility against old versions.
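
                                  For what it’s worth, a deployment that must avoid the Rust build dependency entirely can usually force pip to refuse source builds (a sketch; exact flag support depends on the pip version in use):

                                  ```shell
                                  # Refuse source builds; install only a prebuilt wheel.
                                  pip install --only-binary :all: cryptography

                                  # Or, if no wheel exists for the platform, pin the last pre-Rust release:
                                  pip install 'cryptography<3.4'
                                  ```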

                                  I think the real issue is not that the Rust ecosystem is progressing, but that some distribution package systems are not really well-adapted to modern language ecosystems. Let’s not put Rust/Go, etc. in the same situation as C++ where we have to wait years to move forward, because some people are running Ubuntu 14.04 or 16.04 LTS.

                                  1. 5

                                    Note that Red Hat releases the Red Hat Developer Toolset so that stable-distribution users can use the latest toolchain. It’s Ubuntu and Debian’s fault, not the fault of stable distributions in general.

                                    1. 5

                                      Secondly, if maintainers and users of some distributions want to use old or even ancient versions of software, power to them. But then you also get to carry the burden of maintaining fixes and compatibility against old versions.

                                      I think this needs a specific scale to be meaningful. Rust 1.41 is 1 year, 2 weeks old. This feels pretty new to me, especially considering that this is a foundational package (so it imposes an MSRV on all reverse dependencies) and that it targets a non-Rust ecosystem.

                                      1. 4

                                        It’s not just that “distro packaging systems are slower”. Packaging Rust can be a nightmare. Typically distros will have a wider set of supported platforms than Rust has in tier 1, so package maintainers are receiving the tier2 and tier3 experience of Rust, which is significantly worse.

                                        Any release of Rust needs to be built with a previous version of Rust, and in a tier2/3 system this means someone has to create a bootstrap binary for a wide set of platforms. It’s not uncommon for this process to run into issues because tier2/3 platforms are also not tested.

                                        This process has to be repeated every 6 weeks per the release schedule, so you never catch a break from it.

                                        Our current trouble with rust is that our way of building Rust bootstraps is building them for an older system (so it runs on all the newer ones), and shipping the bootstrap binary with the libraries it needs. Then, Rust is built properly against the actual libraries for the system. This is starting to fall apart because Rust is really hard to build on these tier2/3 platforms, including “simply running out of address space on 32-bit platforms”. Someone will have to re-do the whole thing from scratch and ship a binary-only compiler, and somehow find an answer that doesn’t require us to rebuild a bootstrap binary every time the packages it depends on get updated.

                                        Python adopting Rust in fundamental packages means that as long as these fires haven’t been put out, a huge chunk of packages will no longer work.

                                        Realistically, we are probably going to package the pre-Rust versions and whenever Rust doesn’t work, switch to the old one. But this possibility has a limited shelf life.

                                      1. 6

                                        I think most of the frustration with Wayland doesn’t come from Wayland itself per se, but from its authors and the software related to it. People (myself included) dislike the freedesktop/GNOME/systemd/Flatpak centralization of the Linux desktop propagated by Red Hat, and Wayland is an easy target because it’s “coming for your workflow!!”, so to speak. And to be fair, I also dislike the forcing of Wayland in new versions of Fedora (and I think Ubuntu as well; correct me if I’m wrong) because programs that people are used to no longer work. It’s frustrating when your workflow breaks because of things outside your control, and Wayland is the scapegoat in this situation.

                                        That being said, GNOME people are hardly the easiest people to negotiate with (lol no thumbnails in file picker), and that only stokes the fire.

                                        1. 4

                                          If you don’t want your workflow to be broken, Fedora is the wrong distro for you. It’s very experimental and jumps on all the new hotness on principle.

                                        1. 7

                                          M1 … seems to be the first case of an ARMv8 SoC that removes the 32-bit execution unit from the CPU.

                                          ThunderX2 and Qualcomm Centriq also dropped 32-bit compatibility.

                                          1. 3

                                            Cortex-A65(AE), Cortex-A34 too.

                                          1. 1

                                            I was (am?) on the Docker train, I did deploy two Docker Swarm clusters, but I never got around to Kubernetes. And at this point, I’m wondering (hoping?) whether I can just hold out until the next shiny thing comes along.

                                            Docker is OK as a packaging format. I quite like the idea of layers. However, I can’t shake the feeling that as a runtime it’s a rather wasteful use of hardware. If you run a k8s cluster on Amazon, it’s virtualization upon virtualization (upon whatever virtualization Amazon uses that we don’t see). This comes with a cost, both in managing the complexity and in use of hardware.

                                            To top it off, we have the hopelessly inefficient enterprise sector adding stuff like sidecar attachments for intrusion detection and deep packet inspection of these virtual networks.

                                            I’m interested in trends that go the other way. Rust is cool because with it comes a re-focus on efficient computing. Statically linked binaries would be a much simpler way of packaging than containers.
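
                                            As a sketch of what static linking looks like with today’s toolchains (assuming Go and Rust are installed; exact flags and target names may vary by version):

                                            ```shell
                                            # Go: disabling cgo yields a fully static binary with no libc dependency.
                                            CGO_ENABLED=0 go build -o app .

                                            # Rust: link statically against musl instead of glibc.
                                            rustup target add x86_64-unknown-linux-musl
                                            cargo build --release --target x86_64-unknown-linux-musl
                                            ```

                                            The resulting binary can be copied to a bare host (or a FROM scratch image) with no runtime dependencies to manage.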

                                            1. 1

                                              k8s/docker/etc. don’t need to be virtualized, that is one of their selling points. Dunno if that’s how AWS does it, though.

                                            1. 14

                                              As someone who paid a fair bit of attention to the early Docker world, and who now watches its commodification wondering “what was it?”, I think this article does a good job of explaining it. What it doesn’t explain is… well, I was around in that early Red Hat time, when it was small, when you could shake Bob Young’s hand at a Linux meetup. Heck, I remember when Google was a stanford.edu site… The question in my mind is: why did Red Hat and Google succeed (as corporate entities) and Docker not so much? Perhaps it was the locking in of the company name and the core tech? Perhaps the world of 2010-2020 was far harsher to smaller businesses, or perhaps they just overshot by trying to fight their competitors instead of partnering with them. That will probably have to wait for an HBR retrospective, but I’m not 100% psyched that the big incumbents won this.

                                              1. 13

                                                Docker lost, as I understand it, because of commoditisation. There’s a bunch of goo in Linux to try to emulate FreeBSD jails / Solaris Zones and Docker provided some tooling for configuring this (now fully subsumed by containerd / runc), for building tarballs (not really something that needs a big software stack), and for describing how different tarballs should be extracted and combined using overlay filesystems (useful, but should not be a large amount of code and now largely replaced by the OCI format and containerd). Their two valuable things were:

                                                • A proprietary build of a project that they released as open source that provided tooling for building container images.
                                                • A repository of published container images.

                                                The first of these is not actually more valuable than the open source version and is now quite crufty and so now has a load of competitors. The second is something that they tried to monetise, leaving them open to competitors who get their money from other things. Any cloud provider has an incentive to provide cheap or free container registries because a load of the people deploying the containers will be spending money to buy cloud resources to run them. Docker didn’t have any equivalent. Running a container registry is now a commodity offering and Docker doesn’t have anything valuable to couple their specific registry to that would make it more attractive.

                                                1. 9

                                                  I wrote a bit about that here – Docker also failed to compete with Heroku, under its former name dotCloud.

                                                  https://news.ycombinator.com/item?id=25330023

                                                  I don’t think the comparison to Google makes much sense. I mean Google has a totally different business that prints loads of money. If Docker were a subdivision of Google, it could lose money for 20 years and nobody would notice.

                                                  As for Red Hat, this article has some interesting experiences:

                                                  Why There Will Never Be Another RedHat: The Economics Of Open Source

                                                  https://techcrunch.com/2014/02/13/please-dont-tell-me-you-want-to-be-the-next-red-hat/

                                                  To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important, that we had out-engineered ourselves.

                                                  (Although I don’t think Docker did much engineering. It wasn’t that capable a product. It could have been 30 to 100 people at Google implementing it, etc. Previous thread: https://lobste.rs/s/kj6vtn/it_s_time_say_goodbye_docker)

                                                  1. 4

                                                    I appreciate the article on RedHat. It has certainly opened my eyes to the troubles with their business model, which I had admired in the past. (I suppose it is still admirable, but now at least I know why there aren’t more companies like it.)

                                                    The back half of the article is strange, though. I’m not sure what I’m supposed to learn about building a new business around open source by looking at Microsoft, Amazon, or Facebook. While they all contribute open source code now, they did not build their businesses by selling proprietary wrappers around open source products, as far as I know. And given the enormous size of those companies, it seems very hard to tell how feasible it would be to copy that behavior on a small scale. GitHub seems like a reasonable example of a company monetizing open source, however; it is at least clear that their primary business relies on maintaining git tools. I just wish the article included a few more examples of companies to look up to. Perhaps some lobsters have ideas.

                                                    1. 5

                                                      I just wish the article included a few more examples of companies to look up to

                                                      To a first approximation, there are no companies to look up to.

                                                      1. 2

                                                        I feel like some of the companies acquired by RedHat might be valid examples. I expect that the ones that are still recognizable as products being sold had a working model, but I don’t know what their earnings were like.

                                                      2. 3

                                                        The biggest ones I can think of, not mentioned, are Mongo and Elastic… Redis may go public soon, and there are lots of corps around data storage and indexing that to some extent keep their core product free. There might be more. If you look at interesting failures, going back to the early days, LinuxCare was a large service-oriented company that had a giant flop, as did VA Linux (over a longer time scale):

                                                        linuxcare https://www.wsj.com/articles/SB955151887677940572

                                                        va linux https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux

                                                        1. 2

                                                          Appreciate it, thanks.

                                                    2. 8

                                                      The same question, I think, could be asked about why Netflix succeeded but Blockbuster failed; both were doing a very similar thing. It seems that market success consists of chains/graphs of very small incremental decisions. The closer decisions are to the company’s ‘pivot time’, the more impactful they seem to be.

                                                      And, at least in my observation, paying well and listening to well-rounded, experienced, risk-taking folks who join your endeavor early pays huge dividends later on.

                                                      In my subjective view, Docker failed to visualize and execute on the overall ecosystem around their core technology. Folks who do seem to have that vision (though perhaps not always the core technology) are the ones at HashiCorp. They are no Red Hat by any means, but each of their OSS+freemium products seems to have a cohesive and ‘efficient’ vision of the ecosystem in this space (where by ‘efficient’ I mean that they do not make too many expensive, user-base-jarring missteps).

                                                      1. 1

                                                        could be asked why netflix succeeded but blockbuster failed, both were doing very similar thing

                                                        I’m not sure I agree. Coincidentally, there’s a YT channel that I follow that did a decent overview on both of them:

                                                      2. 3

                                                        My opinion on this is that both Google and Redhat are much closer to the cloud and the target market than Docker is/was.

                                                        Also, I thought that Docker was continuously trying to figure out how to make a net income. They had Docker Enterprise before it was sold off, but I’m not sure how they were aiming to bring in income. And a startup without income is destined to eventually close up.

                                                        1. 3

                                                          the question in my mind is… why did redhat and google succeed (as corporate entities) and docker not so much?

                                                          Curating a Linux distribution and keeping the security patches flowing seamlessly is hard work, which made Red Hat valuable. Indexing the entire Internet is also clearly a lot of hard work.

                                                          By comparison, what Docker is doing as a runtime environment is just not that difficult to replace.

                                                          1. 1

                                                            I kinda feel like this is the ding ding ding answer… when your project attempts to replicate a project going on inside of a BigCo, you will have a hard time preventing embrace and extend. Or perhaps, if you are doing that, keep your company small, w/ limited debt, because you may find a niche in the future, but you can’t beat the big teams at the enterprise game, let alone a federation of them.

                                                          2. 2

                                                            I think we all know our true desires; we are just left to discover them.

                                                            Let’s not forget the Docker timeline:

                                                            • Started in 2013.
                                                            • Got open-source recognition.
                                                            • Got increased public use in 2015/2016.
                                                            • In 2017, the project was renamed from Docker to Moby. Mistake 1.
                                                            • In 2018, DockerHub started requiring user registration. Mistake 2.
                                                            • In 2019, the Docker database was hacked, exposing user data. Mistake 3.
                                                            • In 2020, Docker finally died and awaits rebirth. Goodbye.

                                                            When I think about it, I’m not even mad. Hail the death of Docker.

                                                          1. 6

                                                            I’m curious how dockershim being removed from K8s leads to the conclusion that Docker Inc as a company is dying. As explained by many, the Kubernetes team took that step to remove the bloat that supporting Docker created in the codebase. But do you think people will stop using the docker CLI altogether and write a 10-line bash script to spin up a new container, network, etc.? docker run is a UX layer over those containerd commands, and I don’t see why people would stop using it just because K8s decided to remove the “dockershim” module. And I still don’t understand how any of this affects Docker Inc. AFAIK docker the CLI is open source and obviously doesn’t generate any revenue for Docker Inc (which is what matters when we are talking about a company!).
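
                                                            For a sense of what that UX layer buys you, compare a single docker run with a rough containerd-level equivalent (a hedged sketch: ctr flags vary by containerd version, and ctr deliberately omits niceties like port publishing, restart policies, and name conflict handling):

                                                            ```shell
                                                            # One docker command…
                                                            docker run -d --name web nginx:alpine

                                                            # …versus talking to containerd directly with ctr:
                                                            ctr image pull docker.io/library/nginx:alpine
                                                            ctr run -d docker.io/library/nginx:alpine web
                                                            ```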

                                                            1. 5

                                                              I think the reason it points in that direction is that there are multiple k8s providers and installers that default to the Docker runtime (DigitalOcean managed k8s uses Docker as the runtime, and Kubespray defaults to it as well, though it also supports CRI-O). With dockershim going away, where else will Docker be used other than on developers’ desktops?

                                                              Personally, I bit the bullet and was basically forced to switch to podman/buildah, due to Docker straight up not supporting Fedora 32+ after the kernel change to cgroups v2. Docker Desktop for Mac/Windows is a nice product for running containers on those OSes, but my guess is that is the only place it will stay relevant. It’s easy enough to alias a docker-compatible CLI to docker that doesn’t require the daemon on Linux, etc.
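
                                                              The aliasing amounts to something like this (podman’s CLI intentionally mirrors docker’s for the common subcommands, though edge cases differ):

                                                              ```shell
                                                              # ~/.bashrc: daemonless drop-in for day-to-day docker usage
                                                              alias docker=podman

                                                              # These then run through podman (builds use buildah under the hood);
                                                              # "myimage" is a placeholder name, not a real image:
                                                              docker build -t myimage .
                                                              docker run --rm -it myimage
                                                              ```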

                                                              Also, their attempts at monetizing DockerHub kind of paint a “failing” aura over the company. If they can’t make money off of DockerHub, how can they monetize a daemon that runs containers when there are many other equivalent solutions?

                                                              1. 1

                                                                multiple k8s providers and installers that default to the docker runtime (digitalocean managed k8s uses docker as the runtime, and kubespray defaults to it as well but also supports CRI-O). With dockershim going away where else will docker be used other than developers’ desktops?

                                                                Similarly, microk8s and k3s have been using containerd since forever.

                                                                With dockershim going away where else will docker be used other than developers’ desktops?

                                                                Yep, exactly. It will be used by end developers just the way it is right now. I understand there are more lightweight alternatives for building images (especially ones that don’t require you to run a local daemon) that are more attractive. But not everyone runs K8s, and I think there’s a large market out there of people running standard installations of their software just with docker/docker-compose :)

                                                                1. 2

                                                                  I think there’s a large market out there for people running standard installations of their software just with docker/docker-compose

                                                                  This is extremely true. I have many friends/colleagues who use docker-compose every day and there is no replacement for it yet without running some monster of a container orchestration system (compared to compose at least).
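
                                                                  For readers who haven’t used it: the appeal is that a whole dev stack fits in one small file. A minimal, hypothetical docker-compose.yml might look like:

                                                                  ```yaml
                                                                  # `docker-compose up` starts both services together on a shared network
                                                                  version: "3.8"
                                                                  services:
                                                                    web:
                                                                      image: nginx:alpine
                                                                      ports:
                                                                        - "8080:80"
                                                                    db:
                                                                      image: postgres:13
                                                                      environment:
                                                                        POSTGRES_PASSWORD: example   # placeholder, not a real secret
                                                                  ```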

                                                                  I guess my main worry is that docker is a company based on products which they are having an extremely hard time monetizing (especially after they spun off their Docker EE division). I don’t see much of a future for Docker (the company) even if loads of developers use it on their desktops.

                                                                  1. 2

                                                                    docker-compose was based on an acquihire of the folks who made fig.sh, then very little ever happened feature-wise. Super useful tool, and if they’d been able to make it seamless with deployment (which is very hard, it seems) the story might’ve been different.

                                                                  1. 1

                                                                    Yep, I appreciate that they finally made it available for Fedora 32 (after having to tweak kernel args), but many of us already switched to alternatives.

                                                                    They still don’t ship repos for Fedora 33 (the current release). After checking the GitHub issue related to supporting Fedora 33 it appears the repo is now live, even though it only contains containerd.

                                                              1. 3

                                                                This is very hypocritical considering Google’s own browser is one of the worst offenders at User-Agent impersonation. This is a User-Agent you might see from it: “Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36”.

                                                                This might have additional unintended effects on mail clients as they may open an embedded browser for the OAuth2 dialog.

                                                                1. 8

                                                                  It would be nice that people remembered that you don’t have to leave scorched earth in your wake as you leave a project. It’s not a surprise to me that Xorg is begging for working hands when its active maintainers seem to repeatedly write blog posts telling people it’s deprecated.

                                                                  1. 6

                                                                    The main selling point of lobsters to me is the authored posts and interaction in the comments. I’d rather not see a blanket ban just because there’s some abuse of it.

                                                                    1. 1

                                                                      (Let’s read in to this way too much!)

                                                                      You don’t need to be that person to have a fun GitHub contribution graph.

                                                                      The point is a good one - GitHub contribution graphs can be overrated. However, contributor graphs do show a developer’s consistency. That’s important for any skill, and at any proficiency.

                                                                      1. 4

                                                                        That sounds like a really unconvincing metric. How do you tell if they did work that isn’t counted in GitHub’s metrics?

                                                                        1. 2

                                                                          I’m not very consistent so I’m glad I can employ software to convince others that I am.

                                                                          Related: https://twitter.com/catcarbn/status/1306244325995495426?s=20

                                                                        1. 5

                                                                          It’s important to keep in perspective that the backwards incompatibility issue with non-module builds that was patched into older releases was done so all the way back into the Go 1.9 release branch. Go 1.9 was originally released in August of 2017. I don’t know of anyone that is still using a Go release from back then, which is mostly a product of how well the Go team prioritizes not introducing breaking changes. The Go team also only provides security patches for the latest two version branches of Go (currently, 1.14 and 1.15), so nobody should be using a release that old regardless.

                                                                          1. 2

                                                                            It was painful from the developer perspective because your dependencies and deps-of-deps might not have had module support, and might already be at v2/v3 etc. (thus blocking you from sanely supporting both non-module and module builds during the transition period)

                                                                            1. 4

                                                                              There was never a need for your dependencies, or their dependencies, to be or understand modules in order to consume them in a module build. They just worked. The only module “feature” that wouldn’t have been possible with this code is the ability to include multiple major versions of a package in one build, since that does rely on semantic import versioning (what this author is challenging).
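                                                                              To illustrate: a module build could pin a dependency that itself had no go.mod, with Go synthesizing a pseudo-version from a commit. A sketch of such a go.mod (the module path and version are made up):

                                                                              ```
                                                                              module example.com/app

                                                                              go 1.14

                                                                              // A non-module dependency, pinned by a synthesized pseudo-version.
                                                                              require github.com/some/legacy v0.0.0-20200101000000-abcdef012345
                                                                              ```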

                                                                          1. 17

                                                                            Oh wow, and just when I thought the go dependency mess couldn’t possibly have gotten worse than ca. 2016…

                                                                            1. 11

                                                                              What on earth are they up to? I think this started with GOPATH, a bizarre fixed path that other languages get along fine without. It’s just descended into an increasing mess since. There are people involved with half a century plus of experience between them, yet this seeming aimlessness persists - what are they trying to achieve?

                                                                              1. 3

                                                                                what do you mean by a fixed path? don’t all languages need a location for their libraries?

                                                                                1. 6

                                                                                  Source code had to live in a particular path too. It shouldn’t be required now, but I recently had a cryptic code generation issue magically solved when I moved my code to ~/go/src/github.com/name/name.
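                                                                                  For anyone who never hit this: a sketch of the pre-modules workflow the comment describes (the github.com/name/name path comes from the comment above; everything else is a hypothetical setup):

                                                                                  ```shell
                                                                                  # Source had to live under $GOPATH/src, mirroring the repo's import path.
                                                                                  export GOPATH="$HOME/go"
                                                                                  mkdir -p "$GOPATH/src/github.com/name/name"
                                                                                  cd "$GOPATH/src/github.com/name/name"
                                                                                  # Since Go 1.11, running "go mod init github.com/name/name" anywhere
                                                                                  # removes this layout requirement, but some tooling still assumes it.
                                                                                  ```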

                                                                            1. 4

                                                                              Damn that’s a low bounty. That’s pretty much the holy grail of vulnerabilities.

                                                                              1. 6

                                                                                Great! Looking forward to it!

                                                                                optimizations for SSDs,

                                                                                Always welcome, I didn’t do any research what this means exactly, let’s hope it is not just marketing :)

                                                                                The switch to Btrfs will use a single-partition disk layout, and Btrfs’ built-in volume management. The previous default layout placed constraints on disk usage that can be a difficult adjustment for novice users. Btrfs solves this problem by avoiding it.

                                                                                Neat. The default layout before (in Fedora) made / way too big, wasting a lot of space that could have been used in ${HOME}… especially on (smallish) SSDs.

                                                                                1. 5

                                                                                  optimizations for SSDs,

                                                                                  Always welcome, I didn’t do any research what this means exactly, let’s hope it is not just marketing :)

                                                                                  Btrfs supports compression, as the article says. Enabling compression is expected to improve the SSD’s life span because there is less data to write (the less you write, the less the disk wears out). Of course, it may depend highly on your workload.
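                                                                                  For the curious, compression is just a mount option; a sketch of an /etc/fstab entry for a Btrfs root (the UUID is a placeholder; compress=zstd and ssd are real Btrfs mount options):

                                                                                  ```
                                                                                  # Enable transparent zstd compression on a Btrfs root filesystem.
                                                                                  UUID=xxxx-xxxx-xxxx-xxxx  /  btrfs  compress=zstd:1,ssd  0 0
                                                                                  ```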

                                                                                  1. 1

                                                                                    For SSDs, are they completely random access? Or is there some benefit to storing related blocks close together, à la sequential reads on a spinning disk?

                                                                                    1. 3

                                                                                      There’s some setup cost to select the block in which the sector resides, so accessing adjacent sectors is still faster than truly random accesses.

                                                                                      1. 2

                                                                                        If you look at SSD benchmarks, they do in fact give better throughput in sequential reads vs. random.

                                                                                    2. 3

                                                                                      I must be an outlier in finding their / very small. I use docker and had to move where it puts its images because Fedora made my / too tiny.

                                                                                      1. 3

                                                                                        I’m using VMs in GNOME Boxes and they are stored in ${HOME}/.local/share/gnome-boxes so the exact opposite :-D In either case it should be solved with btrfs…

                                                                                    1. 4

                                                                                      Can’t just change function names willy-nilly, that’d ruin the distribution of function names by length