Threads for kelp

  1. 3

    This is super cool, thanks for making this. I could very much see this being used for an AWS Lambda-like service, or an alternative container orchestration system. Though I haven’t thought through the implications of the networking limitations outlined in the libkrun README.

    1. 11

      Which is why Mozilla Firefox is such a breath of fresh air, as it uses much less of your CPU while still delivering a fast browsing experience. So feel free to have as many tabs open as you want in Firefox, your device will barely feel the effects.

      I use Firefox every day but let’s be real here.

      I am rather tab-phobic, but a few contemporary websites and one modern app like Figma put my Firefox into a swapping tailspin after a couple hours of use. This may be better than Chrome, but it feels like the bad old days of thrashing your hard drive cache.

      To remedy this, it seems the developers decided to unload tabs from memory. It has made Firefox more stable, but page refreshes come surprisingly frequently.

      1. 16

        I am rather tab-phobic, but a few contemporary websites and one modern app like Figma put my Firefox into a swapping tailspin after a couple hours of use.

        I don’t entirely disagree, but maybe some of the fault here lies with the engineers who decided that a vector graphics editor should be written as a web page; it would use several orders of magnitude fewer resources if it were simply a native executable using native graphics APIs.

        There’s only so much that browsers can do to save engineers from their own bad decisions.

        1. 6

          Figma being in browser is a product decision much more than an engineering decision. And being in browser is their competitive advantage.

          It means people can be up and running without installing a client. Seamless collaboration and sharing. These are huge differentiators compared to most things out there that require a native client.

          Yeah, I hate resource waste as much as the next person. But using web tech gives tools like Figma a huge advantage and head start for collaboration vs native apps. But yes, at some cost in client (browser) resources and waste.

          1. 1

            Figma being available in the browser is a competitive advantage, yes, in that it facilitates easy initial on-boarding and reduces friction for getting a stakeholder to look at a diagram.

            But there’s zero competitive advantage for Figma only being available as a browser app – once customers are in the ecosystem there’s every reason for the heavy Figma users to want a better performing native client, even while the crappier web app remains available for the new user or occasional-use stakeholder.

            Figma sort-of recognizes this – they do make a desktop version available for the heavy user, but it’s just the same old repackaged webapp garbage. And limiting themselves to just repackaging the web app is not a “competitive advantage” decision so much as an engineering decision to favour never having to learn anything new ever (once you’ve got the JavaScript hammer, everything’s a nail) over maybe having to learn some new languages and acquire some new skills, which used to be considered a norm in this industry instead of something for engineers to fear and avoid at all costs.

            1. 3

              I’m friends with an early engineer at Figma who architected a lot of the system and knows all the history.

              They say that if they had done a cross-platform native app, it would have been nearly impossible to get the rendering engine to produce exactly the same result across platforms. Even for the web, they had to write their own font renderer.

              Yes, a native app could be faster, but it’s a major tradeoff: collaboration, networking and distribution features, the security sandbox; much of that is just given to you by the browser. With a native app you have to build it all yourself.

              They started with a native app and ended up switching to web only. They also write Ruby, Go and Rust.

              And the Figma app itself is written in C++ compiled to WebAssembly.

      1. 4

        I feel ashamed asking this, but can someone point me in a good direction for knowing why I would pick (or not pick) a BSD over a standard Linux like, say, Debian? My Google searches have failed miserably at providing a decent recap that isn’t biased, marketing, or GPT-3-generated.

        1. 11

          Keep in mind, BSD is not one thing. They all have a common lineage, but FreeBSD and NetBSD both split from 386BSD around 1993. OpenBSD split from NetBSD in 1995, and DragonFlyBSD split from FreeBSD in 2003.

          The first BSD release was in 1978; it was 15 years later (1993) that NetBSD and FreeBSD diverged, and it’s been 29 years since then. So the BSDs have now spent almost twice as long apart as they spent together.

          They each have their own philosophies and priorities, so it’s about which one aligns with yours.

          But I think there are a few things that tie them together.

          1. A base system that is a full operating system, userland and kernel, all developed in a single code base. This is the biggest difference from Linux, IMO, and it can mean a much more cohesive-feeling system. BSD man pages are also of higher quality than on Linux. You can have the whole source of the base system sitting on that system.

          2. A ports system that is fundamentally based on Makefiles and compiling from source. However, they all now have pre-built binary packages that can be installed; that was not always the case, as you used to have to build ports from source.

          My own take on their differences from each other:

          FreeBSD still cares the most about being a good server and does have some big high-scale use at places like Netflix, and was used heavily at Yahoo when they were still relevant. FreeBSD tends to be maybe the most pragmatic of the group, but that can also mean it’s a bit more messy. There is sometimes more than one way to do the same thing, even in the base system. They have advanced features like ZFS, and bhyve for VMs. This can make for a pretty powerful hypervisor. This is where I use FreeBSD. FreeBSD probably has the most users of them all.

          OpenBSD tends to be my favorite. Some of their development practices can seem esoteric. To get a patch included, you mail a diff to their mailing lists, and they still use CVS. They care a lot about security and do a lot of innovation in that area. They care less about things like backwards compatibility, often breaking their ABI between releases.

          Their developers use OpenBSD as their daily drivers, so if you run OpenBSD on a laptop that is used by the right developers, pretty much everything will just work. Their man pages are excellent, and if you take the time to read them you can often figure out how to do most things you need. There is typically only one way to do a thing, and they tend to aggressively remove code that isn’t well maintained. For example, OpenBSD doesn’t support Bluetooth because that code didn’t work well enough and no one wanted to fix it, so they just removed it.

          By modern standards OpenBSD has pretty old filesystems (you’ll need to fsck after a crash), and their multi-processor support and performance still lag far behind FreeBSD or Linux. I generally find that OpenBSD feels substantially slower than Linux when run on the same laptop.

          NetBSD I haven’t used in a LONG time. But for ages their primary goal was portability, so they tended to run on many different types of hardware. I’m not sure if they have enough developers these days to keep a huge list of supported hardware though. They currently list 9 tier 1 architectures, whereas OpenBSD has 13. I think NetBSD still tends to be more used by academics.

          DragonFlyBSD I’ve never actually installed, but I remember the drama when Matt Dillon split from FreeBSD in 2003. Their main claim to fame is the HAMMER2 filesystem and a different approach to SMP than the one FreeBSD took in the move from FreeBSD 4.0 to FreeBSD 5.0 (~2003).

          With all of the BSDs you’re going to have a little bit less software that works on them, though most things will be found in their ports collections. You’ll probably have a more cohesive system, but all the BSDs combined have a small fraction of the developers that work on just the Linux kernel.

          1. 1

            At least the last time I ran FreeBSD there were at least 2 different ways of keeping ports up to date, both of which were confusing and under-documented. Maybe the situation is better now.

          2. 7

            I think it’s mostly a matter of personal preference. Here’s a list of reasons OpenBSD rocks: https://why-openbsd.rocks/fact/ but for me, I prefer the consistency over time of OpenBSD, the fact that my personal workflows haven’t significantly changed in 15 years, and that the system seems to get faster with age (up to a point). Also, installing and upgrading are super easy.

            1. 4

              Long time OpenBSD developer here. I think @kelp’s reply is mostly accurate and as objective as possible for such an informal discussion.

              I will add a short personal anecdote to it. As he says, all my machines were running OpenBSD before the pandemic. In the past I kept my online meetings on my phone because OpenBSD is not yet equipped for that.

              Being a professor at the university, this new context meant that I had to also hold my courses online. This is more complicated than a plain online meeting, so I had to switch back to Linux after more than 15 years.

              The experience on Linux, production-wise, has been so good that I switched all my machines over, except my home server. I don’t mean just online video meetings and teaching, but also doing paperwork, system administration (not professionally, just my set of machines and part of the faculty infrastructure), and most importantly running my numerical simulations for research.

              Now that we are back to normal over here, I could switch back to my old setup but I am finding it really hard to convince my new self.

              This is just a personal experience that I tried to report as objectively as possible.

              On a more opinionated note, I think the trouble with the BSDs is that there is no new blood coming, no new direction. Most of them are just catching up with Linux, which is a hard effort involving a lot of people from the projects. It is very rare to find something truly innovative coming from here (think of something that the other projects would be rushing to pull over and integrate, just like the BSDs are doing with Linux).

              If nothing happens the gap will just widen.

              From my porting experience I can tell you that most open source userland programs are not even considering the BSDs. They assume, with no malevolence, that Linux will be the only target. There are Linuxisms everywhere that we have to patch around or adapt our libc to.

              To conclude, in my opinion, if you want to study and understand operating systems, go with the BSDs: read their source, contribute to their projects. Everything is very well written and documented, unlike Linux, which is a mess and very poor learning material. If you just want to use it for your day-to-day activities and you want an open source environment, then go with the mainstream.

            1. 15

              AWS’ basic model is to charge very, very high sticker prices, and then make deals to discount them aggressively for customers who can negotiate (or for startups, or for spot instances, etc). GCP mostly charges sticker prices. I’m sure they would like to get to an AWS-like model, but they’re still pretty small and don’t have that much market power yet.

              1. 18

                This is one of my least favorite qualities of AWS, but it is a really important discussion point for cloud pricing. No customer of significant volume is paying sticker price for AWS services. GCP is looking for names to give discounts to so they can put you in their marketing. AWS will give discounts to just about anybody with more than $100k in annual cloud spend (and also put you in their marketing). Not sure where Azure falls on the pricing negotiation spectrum.

                1. 30

                  It’s frustrating since one of the original promises of cloud was simple, transparent pricing. It hasn’t been that way for at least 5 years though.

                  1. 14

                    It’s actually been quite funny to see everything come full circle. A la carte pricing was a huge original selling point for cloud. Pay for what you use was seen as much more transparent, but that’s proven not to be the case since most orgs have no clue how much they use. Seeing more and more services pop up with flat monthly charges and how that’s now being claimed as more transparent than pay-as-you-go pricing has been an amusing 180.

                    1. 4

                      It’s better than the old status quo, where everything was about getting on the phone with a sales rep and then a sales engineer, and you had no idea what other companies/netops were getting unless you talked to them. But that’s not to say it’s a good situation. I wonder if there’s room for a cloud provider that is actually upfront about their costs, with none of the behind-the-scenes negotiation silliness, but I’m hard-pressed to see how that would earn them money unless they either charge absurd prices for egress bandwidth or end up hosting a unicorn which brings in some serious revenue.

                  2. 3

                    If you’re spending millions a year with GCP you can get discounts on various things. Especially if you’re spending millions per year with AWS and are willing to move millions of that to GCP and can show fast growth in spend.

                    I’ve also seen 90% (yes 90%) discounts on GCP egress charges. But not sure if they are now backing away from that.

                    As the article points out, AWS gouges you an insane amount on egress and is quite unwilling to discount it. I have seen some discounts on cross-AZ traffic costs though.

                    1. 3

                      There are definitely some customers paying special prices for GCP. You gotta be pretty big.

                      1. 2

                        And they’ll never get there, given that the entire company is built around the goal of never actually talking to customers. Goes against the grain of manual discounting.

                      1. 14

                        Is there any evidence at all that more efficient languages do anything other than induce additional demand, similar to adding more lanes to a highway? As much as I value Rust, I quickly became highly skeptical of the claims that started bouncing around the community pretty early on around efficiency somehow translating to meaningful high-level sustainability metrics. Having been privy to a number of internal usage studies at various large companies, I haven’t encountered a single case of an otherwise healthy company translating increased efficiency into actually lower aggregate energy usage.

                        If AWS actually started using fewer servers, and Rust’s CPU efficiency could be shown to meaningfully contribute to that, this would be interesting. If AWS continues to use more and more servers every year, this is just some greenwashing propaganda and they are ultimately contributing to the likelihood of us having a severe population collapse in the next century more like the BAU2 model than merely a massive but softer population decline along the lines of the CT model. We are exceedingly unlikely to sustain population levels. The main question is: do we keep accepting companies like Amazon’s growth that is making sudden, catastrophic population loss much more likely?

                        1. 5

                          We’ve always had Gates’s law offsetting Moore’s law. That’s why computers don’t boot in a millisecond, and keyboard-to-screen latency is often worse than it was in the ’80s.

                          But the silver lining is that with a more efficient language we can get more useful work done for the same energy. We will use all of the energy, maybe even more (Jevons paradox), but at least it will be spent on something other than garbage collection or dynamic type checks.

                          1. 4

                            I can tell you that I was part of an effort to rewrite a decent chunk of code from Python to C++, then to CUDA, to extract more performance when porting software from a high-power x86 device to a low-power ARM one. So the use case exists. This was definitely not in the server space though, I would love to hear the answer to this in a more general way.

                            I’m not going to try to extrapolate Rust’s performance into population dynamics, but I agree with the starting point that AWS seems unlikely to encourage anything that results in them selling fewer products. But on the flip side, if they keep the same number of physical servers but can sell more VMs because those VMs are more lightly loaded running Rust services than Python ones, then everyone wins.

                            1. 3

                              I’ve spent a big portion of the last 3+ years of my career working on cloud cost efficiency. Any time we cut cloud costs, we are increasing the business margins, and when we do that, the business wants to monitor and ensure we hold onto those savings and increased margins.

                              If you make your application more energy efficient, by whatever means, it’s also probably going to be more cost efficient. And the finance team is really going to want to hold onto those savings. So that is the counterbalance against the induced demand that you’re worried about.

                            1. 12

                              One way to think about Kubernetes is that it is an attempt by AWS’ competitors to provide shared common higher-level services, so that AWS has less ability to lock-in its customers into its proprietary versions of these services.

                              It’s not unlike how in the 90s, all the Unix vendors teamed up to share many components, so they could compete effectively against Windows.

                              1. 6

                                Yeah I agree. I just don’t think Kubernetes is actually very good. It’s operationally very complex, and each of the big 3 providers has its own quirks and limits in its managed k8s service. At my previous employer we were responsible for many hundreds of k8s clusters, and had a team of 10 to keep k8s happy and add enough additional automation to keep it all running. The team probably needed to be twice that to really keep up.

                                I keep wondering if there is an opportunity to make something better in the same area. HashiCorp is trying with Nomad, though I don’t have any direct experience with Nomad to know if they are succeeding in making a better alternative. It integrates with the rest of their ecosystem, but separates concerns: Vault for secrets management, Consul for service discovery, and so on.

                                1. 4

                                  This sounds like progress! OpenStack was bad, K8s is not very good, maybe a new contender will be acceptable, verging on decent. ;)

                                  1. 1

                                    I wish I could upvote this multiple times. (openstack flashback intensifies).

                                    More seriously, from what I heard k8s really seems to be a lot easier to handle than OpenStack and its contemporaries. We had a very small team and at times it felt like we needed 6 out of 12 people (in the whole tech department of the company) just to keep our infra running. I’ve not heard such horror stories with k8s.

                                    1. 1

                                      I want to know why Nomad isn’t on that “acceptable, verging on decent” list.

                                      1. 1

                                        I don’t know anything about nomad.

                                  2. 4

                                    That was sorta how I thought about OpenStack, but I get the impression the software wasn’t really good enough to run in production, and it fizzled out as a result.

                                    Not quite the same though because OpenStack was trying to be an open-source thing at the same level as EC2 + ELB + EBS, rather than at a higher level?

                                    1. 3

                                      Now, I never actually deployed OpenStack, so I may not know what I’m talking about. But I always got the impression that OpenStack was what you got when you had a committee made up of a bunch of large hardware vendors looking after their own interests, the result being fairly low quality and high complexity.

                                      1. 2

                                        I didn’t personally either but I saw someone try and just bail out after, like, a week or two.

                                        1. 2

                                          I actually saw someone put significant resources into getting an OpenStack installation to work. It took months and months for a single person, and the end result wasn’t extremely stable. It could have been made good enough with more people, but at the same time, AWS with all its offerings was unfortunately much, much easier.

                                          Kubernetes seems like a marginally better design and implementation of the same architectural pattern: the vendor-neutral cluster monster.

                                          1. 1

                                            The problem usually was that you wanted some of this compartmentalization (and VMs) on premises; that’s why AWS was out. In our case we simply needed to be locally available because of special hardware in the DC. We thought about going into the cloud (and partly did), but in the end we still needed so much stuff locally that OpenStack was feasible (and Docker wasn’t even an option, because of multicast and a few other things, IIRC).

                                  1. 2

                                    I have the ThinkPad P1 Gen 3 with the 4K screen, Intel i9-10885H, and Quadro T2000 Max-Q. It’s basically the same laptop as the one in this review, but with a Quadro instead of a GeForce GPU. It’s basically maxed out across the board and I even added a 2nd SSD. It feels great to use, but battery life is not great, and it requires a special charger with a special port. Doing just about anything with it makes it warm / hot, and the fans spool up quite loudly. This happens in both Linux and Windows.

                                    I also have a MacBook Air with an M1. It doesn’t even have a fan, hardly ever gets even warm, and beats the ThinkPad on all but the GPU portion of Geekbench. It feels subjectively faster at almost everything, the battery lasts all day, it charges fast on standard USB-C (doesn’t have to be a huge-wattage charger), and the laptop speakers sound better.

                                    I prefer the ThinkPad screen slightly, especially since it’s 2” larger. The ThinkPad keyboard is a bit nicer, but the MacBook Air keyboard is much improved over the abomination that Apple used to ship. My hatred for those keyboards was what got me on the ThinkPad train.

                                    I end up using the MacBook Air FAR FAR more, even though maybe I prefer Linux a little over macOS.

                                    When Apple ships a 14” or 16” MacBook Pro with >= 32GB of RAM it’s going to be really hard to keep me using a ThinkPad for anything other than a bit of tinkering with Linux or OpenBSD (I also have an X1 Carbon Gen 7 for OpenBSD).

                                    1. 1

                                      If you’re on OpenBSD and this is biting you, I guess the fix would be to patch the port to push it up to at least 1.8.1, which has this fixed upstream. Or you can just build it yourself and use that version instead.

                                      Has Rachel submitted a patch to the ports? Seems like she hasn’t, and I don’t blame her. The OpenBSD project sets a high bar for contribution, in terms of the tolerance for user-hostile tooling it demands. Contributing to open source can be far more tiresome than fixing it for yourself, and I found OpenBSD even more taxing than other projects.

                                      It makes me sad, as this phenomenon is one of the reasons classical open source, made by the people for the people, is dying, while large companies take over open source with their PR-driven open-sourced projects.

                                      1. 1

                                        Has Rachel submitted a patch to the ports? Seems like she hasn’t, and I don’t blame her.

                                        Don’t hold your breath. (That said, I’d happily contribute to most projects, and OpenBSD’s process would require significantly more effort on my part.)

                                        1. 1

                                          Asking you and the parent comment.

                                          What is it about the OpenBSD process that you feel makes it so hard?

                                          It’s a bit harder than sending a PR on GitHub. And the quality expectations are high, so you need to read the docs and get an understanding of the process.

                                          But when I contributed some things to OpenBSD ports (updating versions in an existing port and its deps) I found everyone I interacted with to be very helpful, even when I was making dumb mistakes.

                                          1. 2
                                            • no easily searchable bug database with publicly available status discussion, to know if anybody is working on an issue, what work was done, and what dead-ends were hit. No, a mailing list is not a proper substitute for this.
                                            • everything is done in email with arcane formatting requirements.
                                            • the whole tooling is arcane to contemporary users (CVS, specifically formatted email, etc.).

                                            I have done my BSD contributions in the past when I had more time and willingness to go the extra mile for the sake of others. I no longer wish to use painful tools and workflows for the sake of others’ resistance to change. It is an extra burden.

                                            Don’t get me wrong, this is not only about OpenBSD. The same goes for Fedora, for example; they have their own arcane tooling and processes, and so do lots of other projects. They have tools and documentation, but lots of the docs are outdated, and for those not constantly on the treadmill it is a lot of extra research and work, “helped” by outdated docs, just to publish a version-number update and a re-run of the build script. It is a giant hill to climb.

                                            1. 1

                                              Thanks, this is a good answer.

                                              It was nice to see Debian move to a Gitlab instance, and nice to see FreeBSD is finally moving to Git.

                                              But I suspect not much is going to change with OpenBSD, though maybe Got will improve things at some point.

                                          2. 1

                                            Oh. Now this is a totally different reason from what I was thinking about. (I personally don’t agree with her on this one. Still, I don’t blame her, even if her different “political”/cultural stance were her sole reason. People must accept that this is also a freedom of open source users.)

                                            Recently I made up my mind to once again contribute more than I did in the past few years, and while my PRs were accepted, some still didn’t make it to a release, and the project has no testing release branch (which I also understand for a tiny project), so compiling your own fork makes sense even then. And that way contributed stuff often gets left behind in the daily grind. On the other hand, some other tiny contributions were accepted with such warmth and such quick response times that it felt really good.

                                        1. 52

                                          Over the past few years of my career, I was responsible for over $20M/year in physical infra spend. Colocation, network backbone, etc. And then 2 companies that were 100% cloud with over $20M/year in spend.

                                           When I was doing the physical infra, my team was managing roughly 75 racks of servers in 4 US datacenters, 2 on each coast, and an N+2 network backbone connecting them together. That roughly $20M/year counts both OpEx and CapEx, but not engineering costs. I haven’t done this in about 3 years, but for 6+ years in a row, I’d model out the physical infra costs vs AWS prices at 3-year reserved pricing. Our infra always came out about 40% cheaper than buying from AWS, in as apples-to-apples a comparison as I could get. Now I would model this with savings plans, and probably bake in some of what I know about the discounts you can get when you’re willing to sign a multi-year commit.
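
                                           For a flavor of what that kind of model looks like, here’s a minimal sketch; every number in it is a made-up placeholder for illustration, not my actual data:

                                           ```python
                                           # Back-of-the-envelope colo vs. cloud comparison, per year.
                                           # All inputs are illustrative placeholders, not real quotes.

                                           RACKS = 75
                                           SERVERS_PER_RACK = 20
                                           servers = RACKS * SERVERS_PER_RACK

                                           # Physical side: amortized CapEx plus OpEx.
                                           server_capex = 12_000         # $ per server, amortized over its life
                                           server_life_years = 4
                                           colo_opex_per_rack = 30_000   # $ per rack-year: space, power, remote hands
                                           network_backbone = 1_500_000  # $ per year: transit, waves, routers

                                           physical_annual = (
                                               servers * server_capex / server_life_years
                                               + RACKS * colo_opex_per_rack
                                               + network_backbone
                                           )

                                           # Cloud side: a comparable instance at 3-year reserved pricing.
                                           instance_hourly = 1.00        # $ per instance-hour after reserved discount
                                           cloud_annual = servers * instance_hourly * 24 * 365

                                           print(f"physical: ${physical_annual:,.0f}/yr")
                                           print(f"cloud:    ${cloud_annual:,.0f}/yr")
                                           print(f"physical is {1 - physical_annual / cloud_annual:.0%} cheaper")
                                           ```

                                           With these placeholder inputs the physical side comes out roughly 37% cheaper, in the same ballpark as the ~40% above, but the point is the shape of the model, not the specific numbers.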

                                          That said, cost is not the only factor. Now bear in mind, my perspective is not 1 server, or 1 instance. It’s single-digit thousands. But here are a few tradeoffs to consider:

                                          1. Do you have the staff / skillset to manage physical datacenters and a network? In my experience you don’t need a huge team to be successful at this. I think I could do the above $20M/year, 75 rack scale, with 4-8 of the right people. Maybe even less. But you do have to be able to hire and retain those people. We also ended up having 1-2 people who did nothing but vendor management and logistics.

                                           2. Is your workload predictable? This is a key consideration. If you have a steady or highly predictable workload, owning your own equipment is almost always more cost-effective, even when considering that 4-8 person team you need to operate it at the scale I’ve done it at. But if you need new servers in a hurry, well, you basically can’t get them. It takes 6-8 weeks to get a rack built, and then you have to have it shipped, installed, bolted down, etc. All this takes scheduling and logistics, so you have to do substantial planning. That said, these days I also regularly run into issues where the big 3 cloud providers don’t have the gear either, and we have to work directly with them for capacity planning. So this problem doesn’t go away completely; once your scale is substantial enough it gets worse again, even with cloud.

                                          If your workload is NOT predictable, or you have crazy fast growth. Deploying mostly or all cloud can make huge sense. Your tradeoff is you pay more, but you get a lot of agility for the privilege.

                                           3. Network costs are absolutely egregious in the cloud, especially AWS. I’m not talking about a 2x or 10x markup. By my last estimate, AWS marks up their egress costs by roughly 200-300x what it costs them! This is based on my estimates of what it would take to buy the network transit and routers/switches you’d need to egress a handful of Gbps (see the back-of-the-envelope sketch after this list). I’m sure this is an intentional lock-in strategy on their part. That said, I have heard rumors of quite deep discounts on the network if you spend enough $$$. We’re talking multi-year commits in the three-digit millions to get the really good discounts.

                                           4. My final point, and a major downside of cloud deployments combined with a Service Ownership / DevOps model, is you can see your cloud costs grow to insane levels due to simple waste. Many engineering teams just don’t think about the costs. The cloud makes lots of things seem “free” from a friction standpoint. So it’s very, very easy to have a ton of resources running, racking up the bill, and then it’s a lot of work to claw that back. You either need a set of gatekeepers, which I don’t love because that ends up looking like an Ops team, or you have to build a team to build cost visibility and attribution.

                                          On the physical infra side, people are forced to plan, forced to come ask for servers. And when the next set of racks aren’t arriving for 6 weeks, they have to get creative and find ways to squeeze more performance out of their existing applications. This can lead to more efficient use of infra. In the cloud world, just turn up more instances, and move on. The bill doesn’t come until next month.
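
                                           Since people always ask where the 200-300x egress number comes from, here’s a rough sketch of the arithmetic. The transit price is my assumption about what large-commit wholesale transit runs, so treat every input as illustrative:

                                           ```python
                                           # Rough estimate of the AWS egress markup for 1 Gbps of sustained traffic.
                                           # Both prices below are assumptions for illustration; real quotes vary.

                                           aws_egress_per_gb = 0.09        # $/GB, a typical internet egress tier
                                           transit_per_mbps_month = 0.10   # $/Mbps/month wholesale IP transit at scale

                                           # A fully utilized 1 Gbps port moves this much data in a 30-day month:
                                           gb_per_month = 1_000 / 8 * 3600 * 24 * 30 / 1_000   # ~324,000 GB

                                           transit_cost = 1_000 * transit_per_mbps_month       # $/month for 1 Gbps
                                           aws_cost = gb_per_month * aws_egress_per_gb         # $/month, same data

                                           print(f"data moved: {gb_per_month:,.0f} GB/month")
                                           print(f"transit:    ${transit_cost:,.0f}/month")
                                           print(f"AWS egress: ${aws_cost:,.0f}/month")
                                           print(f"markup:     {aws_cost / transit_cost:,.0f}x")
                                           ```

                                           That lands around 290x before counting the routers and cross-connects on the transit side, which is why I say roughly 200-300x.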

                                          Lots of other thoughts in this area, but this got long already.

                                          As an aside, for my personal projects, I mostly do OVH dedicated servers. Cheap and they work well. Though their management console leaves much to be desired.

                                          1. 2

                                            The thing about no code of conduct being a benefit seems to come up somewhat regularly. I even see it show up on the OpenBSD lists. But this is really just a function of community size. A small enough community can be self governing with implicit social norms.

                                            But once it gets large enough, the possibility rises that you’ll have too many bad actors, so you need to start making the norms explicit. This is why you see a code of conduct in FreeBSD, the community is larger.

                                            1. 7

                                              Codes of Conduct aren’t exhaustive lists of what is allowed, and not even exhaustive lists of what is not allowed. They provide a bunch of guidelines, but in the end they need to be filled with life through enforcement action that usually goes beyond the scope of what is written down (and that’s where the bickering about CoCs starts: is any given activity part of one of the forbidden actions or not?), which makes the actual social norms implicit again.

                                              The main signal a CoC provides is that the community is willing to enforce some kind of standard, which is a useful signal. There are communities that explicitly avoid any kind of enforcement, and there are communities that demonstrate that willingness through means other than CoCs.

                                              1. 5

                                                I don’t automatically assume that a community without a CoC is not willing to enforce a minimal standard of decency. If I were to insult a maintainer, a co-contributor or bug-reporter, I wouldn’t be surprised to experience repercussions. Do others assume that because there’s no formal document, that you can just say whatever you want?

                                                Either way, it’s off-topic.

                                                1. 4

                                                  Not being exhaustive is actually what is great about Codes of Conduct. One of the interesting things about moderating online communities is that the more specific and defined your rules for participation are, the more room bad actors have to argue with you and cause trouble.

                                                  If the rules for your website are extremely specific, bad actors will try to poke holes in that logic, find loopholes, and generally argue the details of the rules. However, if your rule for participation is simply “don’t be an asshole”, then you have a lot more room as a moderator to deal with bad actors without getting into the weeds about the specifics.

                                                  The Tildes Code of Conduct is really great for moderating an online community, because it’s simple and vague enough for almost everyone to understand, but does not leave any footing for bad actors to try to argue that they didn’t technically break the rules.

                                                  I think Codes of Conduct are great, and honestly, most of the people I encounter who are against them tend to be… not pleasant to collaborate with.

                                                  Regarding bickering about forbidden actions:

                                                  Shut it down. If you are a moderator or maintainer and someone breaks the rules, ban them. If someone causes a stink about it, warn them, and then ban them too if necessary.

                                                  I think online communities, especially large online communities, seem to be afflicted with this idea that people on the Internet have a right to be heard and to participate. That isn’t true. Operators of these communities are not and should not be beholden to anyone. If someone continuously makes the experience worse for others and refuses to do better, ban them and be done with it.

                                                  1. 2

                                                    From a POSIWID perspective, the things I have observed lead me to conclude that the purpose of CoCs (in business, open source, and other community organisations) is to install additional levers that may only be operated by politically powerful people, while providing little-to-no protection for the people they claim to protect. I have seen people booted from projects despite admission by the admins that no CoC violation occurred, and I have seen people close ranks around politically powerful people who remain protected despite violating organisation/project/event CoCs.

                                                  2. 3

                                                    You seem to be equating a code of conduct with a willingness to ban bad actors. I think that’s a false equivalence.

                                                    1. 2

                                                      That was not my point. My point was that the need for a code of conduct is often due to community size. Smaller communities can be more self-policing based on implicit norms. They certainly can and do ban or drive off bad actors.

                                                  1. 20

                                                     TL;DR: usernames weren’t sanitized and could contain “-”, making them parse as options to the authentication program. Exploiting this, the username “-schallenge:passwd” allowed a silent auth bypass because the passwd backend doesn’t require a challenge.
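
                                                     For anyone who wants to see the bug class in miniature, here’s a sketch using Python’s getopt (an illustration of the pattern, not OpenBSD’s actual code): when a user-controlled string starting with a dash reaches an option parser, it gets consumed as an option.

                                                     ```python
                                                     import getopt

                                                     # A toy auth helper that accepts "-s style" options followed by a
                                                     # username, mimicking the shape of the vulnerable interface.
                                                     def parse(argv):
                                                         opts, args = getopt.getopt(argv, "s:")
                                                         return dict(opts), args

                                                     # Attacker-chosen "username" passed straight through as an argument:
                                                     print(parse(["-schallenge:passwd"]))
                                                     # -> ({'-s': 'challenge:passwd'}, [])
                                                     # The "username" was consumed as the -s option: no username is left,
                                                     # and the caller just selected the attacker's authentication style.

                                                     # The classic fix, besides validating the username: terminate option
                                                     # parsing before any untrusted input.
                                                     print(parse(["--", "-schallenge:passwd"]))
                                                     # -> ({}, ['-schallenge:passwd']), a literal username again
                                                     ```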

                                                    Awesome find, great turnaround from Theo.

                                                    1. 2

                                                      Yikes.

                                                       It’s a modern marvel that people end up using web frameworks with automatic user-data parsing and escaping for their websites, because if they didn’t, so many places would have these kinds of “game over” scenarios.

                                                      1. 5

                                                        Usernames in web applications are not easy, nor is there wide awareness of the problems or deployment of solutions.

                                                        If you’re interested in learning more, I’ve gone on about this at some length.

                                                      2. 1

                                                        If memory serves right, there was an old login bug (circa ’99) that was the same sort of thing:

                                                        http://seclab.cs.ucdavis.edu/projects/testing/vulner/18.html

                                                        Edit: https://lobste.rs/s/bufolq/authentication_vulnerabilities#c_jt9ckw

                                                        Too slow I guess :)

                                                        1. 1

                                                          Is this specific to Openwall users or is it applicable to OpenBSD in general?
                                                          From the title it looks like an authentication vulnerability in the OpenBSD core OS.

                                                          1. 1

                                                            This is OpenBSD in general.

                                                          1. 6

                                                             I really liked this, particularly the points about maintenance being just as important as building something new. It’s nice to see their philosophy articulated, and how the Neovim team has put it into action. I loved the call-out about fixing an issue being an O(1) cost while the impact is O(N*M) over all the users it reaches.

                                                            Very much looking forward to seeing their roadmap realized.

                                                            1. 1

                                                              I’ve been using this for the last several months. Switched over from a homeshick setup. I’m liking it now.

                                                              My repo with a README showing my simple workflow: https://github.com/kelp/dotfiles

                                                              1. 23

                                                                 I think Josh addresses a good point here: systemd provides features that distributions want, but that other init systems actively call non-features. That’s a classic culture clash, and it shows in the systemd debates: people hate it or love it (FWIW, I love it). I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                                 Still, it’s always important to have a way out of a component. But the problem here seems to be that the scope of an init system is ill-defined and there are fundamentally different ideas about where the Linux world should move. systemd moves away from the “kernel with a rather free userspace on top” model; others don’t agree.

                                                                1. 17

                                                                  Since systemd is Linux-only, no one who wants to be portable to, say, BSD (which I think includes a lot of people) can depend on its features anyway.

                                                                  1. 12

                                                                    Which is why I wrote “Linux world” and not “Unix world”.

                                                                    systemd has a vision for Linux only and I’m okay with that. It’s culture clashing, I agree.

                                                                    1. 6

                                                                      What I find so confusing - and please know this comes from a “BSD guy” and a place of admitted ignorance - is that it seems obvious the natural conclusion of these greater processes must be that “Linux” is eventually something closer to a complete operating system (not a bazaar of GNU/Linux distributions). This seems to be explicitly the point.

                                                                      Not only am I making no value judgement on that outcome, but I already live in that world of coherent design and personally prefer it. I just find it baffling to watch distributions marching themselves towards it.

                                                                      1. 6

                                                                        But it does create a monoculture. What if you want to run service x on BSD or Redox or Haiku? A lot of Linux tools can be compiled on those operating systems with a little work, sometimes for free. If we start seeing hard dependencies on systemd, you’re also hurting new-OS development. Your service won’t be able to run in an Alpine docker container either, or on distributions like Void Linux, or default Gentoo (although Gentoo does have a systemd option; it too is in the mess of supporting both init systems).

                                                                        1. 7

                                                                          We’ve had wildly divergent Unix and Unix-like systems for years. Haiku and Mac OS have no native X11. The BSDs and System V have different init systems; OpenBSD has extended libc for security reasons. Many System V based OSes (looking at you, AIX) take POSIX to malicious-compliance levels. What do you think ./configure is supposed to do if not cope with this reality?

                                                                      2. 2

                                                                        Has anyone considered or proposed something like systemd’s feature set but portable to more than just linux? Are BSD distros content with SysV-style init?

                                                                        1. 11

                                                                           A couple of pedantic nits. BSDs aren’t distros. They are each distinct operating systems that share a common lineage. Some code and ideas are shared back and forth, but the big 3, FreeBSD, NetBSD and OpenBSD, diverged in the 90s. 1BSD was released in 1978. FreeBSD and NetBSD forked from 386BSD in 1993, and OpenBSD from NetBSD in 1995. So that’s about 15 years, give or take, of BSD before the modern BSDs forked.

                                                                          Since then there has been 26 years of separate evolution.

                                                                          The BSDs also use BSD init, so it’s different from SysV-style. There is a brief overview here: https://en.m.wikipedia.org/wiki/Init#Research_Unix-style/BSD-style

                                                                          1. 2

                                                                             I think the answer to that is yes and no. Maybe the closest would be (Open)Solaris SMF. Or maybe GNU Shepherd or runit/daemontools.

                                                                             But IMNHO there are no good arguments for the sprawl/feature creep of systemd, and people haven’t tried to copy it because it’s flawed.

                                                                        2. 6

                                                                          It’s true that systemd is comparatively featureful, and I’ll extend your notion of shipping a software suite by justifying some of its expansion into other aspects of system management in terms of it unifying a number of different concerns that are pretty coupled in practice.

                                                                          But, and because of how this topic often goes, I feel compelled to provide the disclaimer that I mostly find systemd just fine to use on a daily basis: as I see it, the problem, though, isn’t that it moves away from the “free userspace” model, but that its expansion into other areas seems governed more by political than by technical concerns, and with that comes the problem that there’s an incentive to add extra friction to having a way out. I understand that there’s a lot of spurious enmity directed at Poettering, but I think the blatant contempt he’s shown towards maintaining conventions when there’s no cost in doing so or even just sneering at simple bug reports is good evidence that there’s a sort of embattled conqueror’s mindset underlying the project at its highest levels. systemd the software is mostly fine, but the ideological trajectory guiding it really worries me.

                                                                          1. 1

                                                                            I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                                             What do you mean here? Bullying all distro maintainers until they are forced to set up your software as default, up to the point of provoking the suicide of people who don’t want to? That’s quite heavy sarcasm you are using here.

                                                                            1. 12

                                                                              up to the point of provoking the suicide of people who don’t want to

                                                                              Link?

                                                                              1. 25

                                                                                 How was anyone bullied into running systemd? For Arch Linux this meant we no longer had to maintain initscripts and could rely on systemd service files, which are a lot nicer. In the end it saved us work, and that’s exactly what systemd tries to be: a toolkit for initscripts and related system-critical services, and now also a way of unifying Linux distros.

                                                                                1. 0

                                                                                   huh? Red Hat and Poettering strongarmed distribution after distribution and stuffed the Debian developer ballots. This is all a matter of the public record.

                                                                                  1. 10

                                                                                     stuffed the Debian developer ballots

                                                                                    Link? This is the first time I am hearing about it.

                                                                                    1. 5

                                                                                       I’m also confused. I followed the Debian process and found it very thorough and good. The documents coming out of it are still a great reference.

                                                                                2. 2

                                                                                  I don’t think skade intended to be sarcastic or combative. I personally have some gripes with systemd, but I’m curious about that quote as well.

                                                                                     I read the quote as being sympathetic towards a more unified init system. Linux sometimes suffers from having too many options (a reason I like BSD). But I’m not sure if that was the point being made.

                                                                                  Edit: grammar

                                                                                  1. 5

                                                                                    I value pieces that are intended to work well together and come from the same team, even if they are separate parts. systemd provides that. systemd has a vision and is also very active in making it happen. I highly respect that.

                                                                                     I also have gripes with systemd, but in general I like to use it. And as long as no other project shows up with the ambition to move the world away from systemd by being better, and by being better at convincing people, I’ll stick with it.

                                                                                  2. 2

                                                                                    I interpreted it as having fewer edges where you don’t have control. Similar situations happen with omnibus packages that ship all dependencies and the idea of Docker/containers. It makes it more monolithic, but easier to not have to integrate with every logging system or mail system.

                                                                                     If your philosophy of Linux is Legos, you probably feel limited by this. If your philosophy is platform, then this probably frees you. If the constraints are doable, they often prevent subtle mistakes.

                                                                                1. 2

                                                                                  I love this question.

                                                                                  I iterated on this for quite a while but over the last several years I’ve settled on this.

                                                                                  Setup:

                                                                                  Plugins:

                                                                                  • I hardly have a system.
                                                                                  • I write TODO, the date and a square box for a TODO list
                                                                                  • I write a descriptive heading, and use some bullet points for writing down some ideas.

                                                                                   The leather notebook has quite the patina now from carrying it around for the last few years. And I’m a big fan of the Doane Paper grid lines notepads: higher quality paper than Field Notes, but the same form factor. And I enjoy the grid lines pattern.

                                                                                   I also keep some of the Doane Paper writing pads on my desk at work and at home for random disposable notes, like a daily TODO and working out ideas.

                                                                                  1. 19

                                                                                    I’ve started to appreciate this perspective recently. It’s easy to get carried away with always digging deeper to learn the next lowest level. This inevitably leads to finding new, ugly problems with each new level. If you’re anything like me, you’ll constantly feel the urge to rewrite the whole world. It’s not good enough until I develop my own CPU/OS/programming language/UI toolkit/computer environment thing. This is also the problem with learning things like Lisp and Haskell (which are “low-level” in a theoretical sense).

                                                                                    At some point, you have to accept that everything is flawed, and if you’re going to build something that real people will use, you have to pick a stack, accept the problems, and start coding. Perfect is the enemy of good after all.

                                                                                    But there is still value in learning low-level languages, and the author may have gone too far in his criticisms. In high school, I learned C and decided the whole world needed to be rewritten in C. I wrote parts of games, kernels, compilers, and interpreters, and learned a lot. My projects could have been more focused. I could have chosen more “pragmatic” languages, and maybe built software that actually got used by a real end-user. Still, there were a few lessons I learned.

                                                                                    First, C taught me how little library support you actually need to build usable software. To make an extreme comparison, this is totally at odds with how most JavaScript developers work. Most of my C projects required nothing but the stdlib, and maybe a library for drawing stuff to the screen. Sure, this meant I ended up writing a lot of utility functions myself, but it can be pretty freeing to realize how few lines of code you need to build highly interactive and useful applications. C tends to dissuade crazy abstractions (and thus external libraries) because of the limits and lack of safety in the language. This forces you to write everything yourself and to understand more of the system than you would have had you used an external library.

                                                                                     The corollary is recognizing how difficult things were in “the old days” when components were less composable and unsafe languages abounded. We have it good. Sure, software still sucks, and things are worse now in certain ways because of multithreading and distributed systems, but at least writing code that does crazy string manipulation is easy now[1].

                                                                                    The other value of learning low-level programming is that it does come in handy 1% of the time when the abstraction leaks and something breaks at a lower level. In such a situation, rather than being petrified and reverting to jiggling the system and hoping it’ll start working again, you roll up your sleeves, crack out gdb and get to work debugging the segfault that’s happening in the JVM because somebody probably missed an edge case in the FFI call to an external library. It’s handy to be able to do this[2].

                                                                                    I’ll continue to use the shorthand of “knowing your computer all the way to the bottom” as meaning understanding to the C/ASM level, but I’ve definitely become more cognizant of the problems with C and focusing too much on optimization. I love optimization and low-level coding, but most problems will suffer from the extra complexity of writing the whole system in C/C++. Code maintainability and simplicity are more important.

                                                                                    [1] Converting almost any old system into a modern language would massively reduce the total SLOC and simplify it considerably. The problems we have now are largely the fault of us cranking up the overall complexity. Some of the complexity is incidental, but most is accidental.

                                                                                    [2] As a bonus, you look like a total wizard to everybody else too :)

                                                                                    1. 13

                                                                                      The other value of learning low-level programming is that it does come in handy 1% of the time when the abstraction leaks and something breaks at a lower level. In such a situation, rather than being petrified and reverting to jiggling the system and hoping it’ll start working again, you roll up your sleeves, crack out gdb and get to work debugging the segfault that’s happening in the JVM because somebody probably missed an edge case in the FFI call to an external library. It’s handy to be able to do this[2].

                                                                                       I guess my perspective on this is a bit warped from leading SRE and performance engineering teams for so long. This 1% is our 80%. So the job is often about looking through the abstractions and understanding how something underneath is failing or inefficient. In today’s cloud world, that directly translates into real dollars, which fluctuate with the efficiency and usage of the software we’re running.

                                                                                       It seems like most of the perspectives here, and in the main article, are in the context of writing application code or business logic in situations that are not performance critical.

                                                                                      1. 2

                                                                                         Yeah, I generally consider stuff like ops and SRE to be at the OS/kernel level anyway. I’d guess you’re generally less concerned with business logic and more concerned with common request behaviors and how they impact performance. But I think (to the original author’s point) that understanding how filesystems perform or how the networking stack behaves under different conditions is essential for this type of work anyway. SREs are actually a group that would have a real reason to experiment with different process scheduling algorithms! :P

                                                                                         Digging lower for an SRE would probably mean things like learning how the kernel driver or the firmware for an SSD runs, or even how the hardware works, which likely has less return than a broader understanding of the different kernel facilities.

                                                                                      2. 3

                                                                                        how few lines of code you need to build highly interactive and useful applications

                                                                                        Sure, if you ignore necessary complexities like internationalization and accessibility. Remember Text Editing Hates You Too from last week? The days when an acceptable text input routine could fit alongside a BASIC interpreter in a 12K ROM (a reference to my own Apple II roots) are long gone. The same applies to other UI components.

                                                                                        1. 2

                                                                                          s/accidental/essential

                                                                                          1. 0

                                                                                            C taught me how little library support you actually need to build usable software. To make an extreme comparison, this is totally at odds with how most JavaScript developers work. Most of my C projects required nothing but the stdlib,

                                                                                             Most people who make this claim about JavaScript don’t appreciate that there is no stdlib. It’s just the language - very basic until recently, and still pretty basic - which amounts to a couple of data structures and a hodgepodge of helper functions. The closest thing there has been to a stdlib is probably the third-party library lodash. Even that’s just some extra helper functions. People didn’t spend countless hours implementing or using a bunch of libraries out of ignorance; they did it because there was no stdlib!

                                                                                            1. 6

                                                                                              Most people who make this claim about JavaScript don’t appreciate that there is no stdlib

                                                                                               Um. Depending on the minimum supported browser, JavaScript has at least string manipulation methods (including regexes, splitting, joining, etc.), garbage collection, hash tables, sets, promises, BigInts, Unicode strings, exception handling facilities, generators, prototypal OO functions, and DOM manipulation functions. Every web browser supports drawing things with the canvas API or SVG (or both), plus CSS styling. You get an entire UI toolkit for free.

                                                                                               C has precisely zero of those. You want hash tables? You have to implement your own hashing function and build out a hash table data structure from there. How about an object-oriented system? You’ll have to define a convention and implement the whole system from scratch (including things like class hierarchy management and vtable indirection if you want polymorphic behavior).
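
                                                                                               To give a sense of scale, here is a minimal sketch of what “build out a hash table data structure” can look like in C: a fixed-size, separately chained string-to-int table. The names (ht_put, ht_get, BUCKETS) and the djb2 hash are illustrative choices of mine, not anything prescribed above.

                                                                                                   #include <stdio.h>
                                                                                                   #include <stdlib.h>
                                                                                                   #include <string.h>

                                                                                                   #define BUCKETS 64

                                                                                                   struct entry { char *key; int value; struct entry *next; };
                                                                                                   struct table { struct entry *buckets[BUCKETS]; };

                                                                                                   /* You even pick your own hash function; this is the classic djb2. */
                                                                                                   static unsigned long hash(const char *s) {
                                                                                                       unsigned long h = 5381;
                                                                                                       while (*s) h = h * 33 + (unsigned char)*s++;
                                                                                                       return h;
                                                                                                   }

                                                                                                   static void ht_put(struct table *t, const char *key, int value) {
                                                                                                       unsigned long i = hash(key) % BUCKETS;
                                                                                                       for (struct entry *e = t->buckets[i]; e; e = e->next)
                                                                                                           if (strcmp(e->key, key) == 0) { e->value = value; return; }
                                                                                                       struct entry *e = malloc(sizeof *e);   /* manual memory management */
                                                                                                       e->key = malloc(strlen(key) + 1);
                                                                                                       strcpy(e->key, key);
                                                                                                       e->value = value;
                                                                                                       e->next = t->buckets[i];
                                                                                                       t->buckets[i] = e;
                                                                                                   }

                                                                                                   static int ht_get(const struct table *t, const char *key, int *out) {
                                                                                                       for (struct entry *e = t->buckets[hash(key) % BUCKETS]; e; e = e->next)
                                                                                                           if (strcmp(e->key, key) == 0) { *out = e->value; return 1; }
                                                                                                       return 0;
                                                                                                   }

                                                                                                   int main(void) {
                                                                                                       struct table t = {0};
                                                                                                       ht_put(&t, "answer", 42);
                                                                                                       int v;
                                                                                                       if (ht_get(&t, "answer", &v)) printf("%d\n", v);   /* prints 42 */
                                                                                                       return 0;
                                                                                                   }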

                                                                                               In JavaScript, a “string” is a class with built-in methods containing a bounded array of characters. In C, a “string” is a series of contiguous bytes terminated by a zero. There’s no concept of bounded arrays in C. Everything is pointers and pointer offsets. You want arrays with bounds checks? Gotta implement those. Linked lists? Build ’em from the recursive definition.
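
                                                                                               And “you want arrays with bounds checks? Gotta implement those” looks something like this in practice; the bstr type and its helper are hypothetical, hand-rolled purely for illustration:

                                                                                                   #include <stdio.h>
                                                                                                   #include <string.h>

                                                                                                   /* A hand-rolled bounded string: C itself gives you nothing like this. */
                                                                                                   struct bstr {
                                                                                                       size_t len;
                                                                                                       char data[64];
                                                                                                   };

                                                                                                   /* Copy with an explicit bounds check instead of trusting the caller. */
                                                                                                   static int bstr_set(struct bstr *s, const char *src) {
                                                                                                       size_t n = strlen(src);
                                                                                                       if (n >= sizeof s->data) return -1;   /* would overflow: reject it */
                                                                                                       memcpy(s->data, src, n + 1);          /* n + 1 copies the terminator */
                                                                                                       s->len = n;
                                                                                                       return 0;
                                                                                                   }

                                                                                                   int main(void) {
                                                                                                       struct bstr s;
                                                                                                       if (bstr_set(&s, "hello") == 0)
                                                                                                           printf("%zu bytes: %s\n", s.len, s.data);   /* 5 bytes: hello */
                                                                                                       return 0;
                                                                                                   }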

                                                                                               Literally the most string-manipulation-ish behavior I can think of off the top of my head is the strtok function declared in string.h. It performs string tokenization by scanning through a string until it hits a delimiter, returning a pointer to the beginning of the token and overwriting the delimiter at the end of the token with a null terminator, while stashing a pointer to the rest of the string in static storage so the next call can pick up where it left off. It does this to avoid an allocation, since memory management is manual in C. Clearly it’s not thread-safe.
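
                                                                                               The standard usage pattern makes that destructive, stateful design visible; this is plain ISO C using only string.h, nothing specific to the discussion above:

                                                                                                   #include <stdio.h>
                                                                                                   #include <string.h>

                                                                                                   int main(void) {
                                                                                                       char line[] = "split this line";   /* must be writable: strtok mutates it */
                                                                                                       for (char *tok = strtok(line, " "); tok != NULL; tok = strtok(NULL, " "))
                                                                                                           printf("token: %s\n", tok);    /* resume pointer hides in static storage */
                                                                                                       return 0;
                                                                                                   }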

                                                                                               That’s about the highest-level thing string.h can do. It also gives you things like, oh, memcpy, which is literally a function that moves raw bytes of memory around.

                                                                                               Maybe JavaScript’s stdlib isn’t as extensive as, say, Python’s, but it exists, and it is not small (and, to my original point, it is far nicer for getting real work done quickly than what was possible in the past). But every external library that’s included in a JS application gets sent to every client every time they request the page[1]. I’m not saying that external libraries should never be used with JS, but there’s a multiplicative cost with JS that ought to encourage more compact code than what is found on most websites.

                                                                                              [1] Yes, sans caching, but centralizing every JS file to a CDN has its own set of issues.

                                                                                              1. 3

                                                                                                What is a stdlib if not a library of datatypes and functions that comes standard with an implementation of a language?

                                                                                                1. 2

                                                                                                   What do you call String and RegExp and Math, etc., in JavaScript? These are defined by the language spec but are basically comparable to C’s stdlib.

                                                                                                   And of course, in the most likely place to use JavaScript, the browser, you have a pretty extensive DOM API too (it does more than just the DOM, though!).

                                                                                              1. 3

                                                                                                This is so true. As a rule, skills don’t transfer. I am more of a compiler geek than an OS geek, but I openly say to anybody who would listen: learn the compiler if and only if you want to learn the compiler. Do not expect learning the compiler to improve your skills in any other kinds of programming. It is not privileged.

                                                                                                1. 11

                                                                                                     Slight counterpoint. I’ve watched the best software engineer on my team explain to others what the Go compiler does with a particular piece of code that makes it behave and perform a certain way. That kind of knowledge and explanation led to a better implementation for us, and I’m not sure he’d be able to offer the solutions he does without some understanding of what the compiler is doing. This is in the context of code in a messaging pipeline that has to be highly reliable and highly efficient; inefficient implementations can make our product economically infeasible.

                                                                                                  1. 2

                                                                                                       When I see a piece of code, I often try to reason about how it has to be implemented in the compiler, and what it has to do at runtime, in order to understand its properties. Going into the compiler is useful this way. For example, if I want to know how big I can expect a Java class with a superclass to be, I know the JVM can’t do cross-class-hierarchy layout optimization, because a field layout that changed based on the subclass would make virtual calls tricky to implement.
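
                                                                                                       The constraint is easy to picture in C, where the “superclass as a prefix of the subclass” layout has to be written by hand. The structs below are invented for illustration and are not how any particular JVM actually lays out objects:

                                                                                                           #include <stdio.h>

                                                                                                           struct base {
                                                                                                               int x;              /* base fields sit at fixed offsets up front... */
                                                                                                           };

                                                                                                           struct derived {
                                                                                                               struct base base;   /* ...so a subtype can only append fields, */
                                                                                                               int y;              /* never reorder or interleave them */
                                                                                                           };

                                                                                                           /* Code compiled against base works on any subtype unchanged, which is
                                                                                                              exactly what breaks if the layout could vary per subclass. */
                                                                                                           static int get_x(const struct base *b) { return b->x; }

                                                                                                           int main(void) {
                                                                                                               struct derived d = { { 1 }, 2 };
                                                                                                               printf("%d\n", get_x(&d.base));   /* prints 1 */
                                                                                                               return 0;
                                                                                                           }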

                                                                                                  2. 3

                                                                                                       Good example. My main takeaways from hacking on compilers were parsing and pipelining. Parsing should still be automated where possible with parser generators. That said, seeing the effect that clean vs. messy grammars had was enlightening in a way that left me eyeballing that in future decisions about which data formats to use. LANGSEC showed up later doing an even better job of illustrating that with Chomsky’s model, plus showing how damaging bad grammars can be.

                                                                                                       Pipelining was an abstract concept that showed up again in parallelization, UNIX pipes, and services. It was useful. You don’t need to study compilers to learn it, though. Like in the OP, there’s already a Wikipedia article to point people to instead.
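
                                                                                                       For what it’s worth, here is a toy sketch of the pipelining idea with an actual UNIX pipe: one process stage produces output, another consumes it concurrently (POSIX pipe(2)/fork(2); nothing specific to the compilers being discussed):

                                                                                                           #include <stdio.h>
                                                                                                           #include <string.h>
                                                                                                           #include <unistd.h>

                                                                                                           int main(void) {
                                                                                                               int fds[2];
                                                                                                               if (pipe(fds) != 0) return 1;

                                                                                                               if (fork() == 0) {                 /* child: the producer stage */
                                                                                                                   close(fds[0]);
                                                                                                                   const char *msg = "output of stage one\n";
                                                                                                                   write(fds[1], msg, strlen(msg));
                                                                                                                   close(fds[1]);
                                                                                                                   return 0;
                                                                                                               }

                                                                                                               close(fds[1]);                     /* parent: the consumer stage */
                                                                                                               char buf[64];
                                                                                                               ssize_t n = read(fds[0], buf, sizeof buf - 1);
                                                                                                               if (n > 0) {
                                                                                                                   buf[n] = '\0';
                                                                                                                   printf("stage two got: %s", buf);
                                                                                                               }
                                                                                                               close(fds[0]);
                                                                                                               return 0;
                                                                                                           }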

                                                                                                    1. 3

                                                                                                      Grammars and parsing are orthogonal to compilers. Everyone who ever deals with file formats must learn about that because people who don’t know tend to produce horrible grammars and even worse parsers that only work correctly for a tiny subset of cases.

                                                                                                      Still, one can learn it without ever hearing about the things compilers do with ASTs.

                                                                                                      1. 1

                                                                                                         I agree. Compiler books and articles were just my only exposure to them at the time. The modern web having articles on just about everything means it’s easier than ever to narrow things down to exactly the right topic.

                                                                                                        1. 2

                                                                                                           Yeah, and then a number of compiler books don’t really discuss the implications of grammar design either, but get to subjects irrelevant for most readers right away. Like, nobody is really going to write their own code generator anymore now that all mainstream compilers are modular (and those with a good reason to are not, or should not be, reading introductory books).

                                                                                                    2. 3

                                                                                                       There is one skillset I think that learning a compiler would help you with: how to approach a large and foreign codebase. You don’t have to learn a compiler to practice it, but learning how to approach unknown codebases is a cross-cutting skill.

                                                                                                       Of course, you can also learn that skill by reading the source of your web/GUI/game/app framework of choice, assuming it’s available. I do think there’s something to the general idea of building a thing you usually only ever consume.

                                                                                                      1. 15

                                                                                                        Apple won’t ship anything that’s licensed under GPL v3 on OS X. Now, why is that?

                                                                                                        There are two big changes in GPL v3. The first is that it explicitly prohibits patent lawsuits against people for actually using the GPL-licensed software you ship. The second is that it carefully prevents TiVoization, locking down hardware so that people can’t actually run the software they want.

                                                                                                        So, which of those things are they planning for OS X, eh?

                                                                                                        Copyright lawyers from multiple organizations that I’ve spoken to simply aren’t too happy with the GPLv3 because to them it lacks clarity. It took quite a while for GPLv2 to be acceptable in any place where lawyers have a veto because of its unusual construction, and GPLv3 added more of that, in language that doesn’t make it easy to interpret (apparently, I’m not a lawyer).

                                                                                                        1. 6

                                                                                                           I work at a large company, and the guidance from above is that we should avoid GPL-licensed code at all costs. If we cannot avoid it, we need to get permission and isolate it as well as possible from the rest of the source code. This is done not because we want to sue our customers or engage in TiVoization, but simply to guard ourselves against lawsuits and against being forced to release sensitive parts of our code.

                                                                                                          1. 4

                                                                                                            That’s the generic “careful with GPL” policy. There are companies that are fine with GPLv2 specifically (for the most part) but aren’t fine with GPLv3 because they consider its potential consequences less clear.

                                                                                                            1. 3

                                                                                                              Which is why I now use AGPLv3 for everything I personally write. Fuck people taking and taking and not giving anything back. I feel like we’ve lost our open source way. I referenced this very article a few years back when I wrote this:

                                                                                                              https://battlepenguin.com/tech/the-philosophy-of-open-source-in-community-and-enterprise-software/

                                                                                                              1. 1

                                                                                                                This is counterintuitive.

                                                                                                                 Fewer people willing or able to even consider using your software instantly means less potential for submissions that fix bugs or add features.

                                                                                                                1. 1

                                                                                                                  It depends on your priorities. Do you want more users or do you want your software to be free?

                                                                                                                  1. 1

                                                                                                                    You seem to want more contributions, which is why I commented.

                                                                                                              2. 3

                                                                                                                The company I work for has the same policy.

                                                                                                                1. 3

                                                                                                                  Yep. Policies like your employer’s are the main reason that I carefully choose licenses these days. I want to exclude as many corporations as possible from using the code without disqualifying it from being Free Software. I think WTFPL is the best widely-used license for this purpose; does your employer’s policy allow WTFPL?

                                                                                                                  1. 2

                                                                                                                     One of my employers explicitly put the WTFPL on the blacklist. Apparently it’s important to have a warranty disclaimer somewhere, which it lacks. Consider the ISC license (https://opensource.org/licenses/isc) instead, which is short and to the point, yet ticks all the boxes that seem to be important to lawyers.

                                                                                                                    1. 1

                                                                                                                      The ISC license is a fine license indeed, but if you re-read my original comment, I am looking for licenses which are not employer-friendly. Indeed, I had considered the ISC license, but found that too many corporations would be willing to use ISC-licensed code.

                                                                                                                      1. 1

                                                                                                                        Ah, right. I misread, I’m sorry.

                                                                                                                         Yes, the WTFPL is corporate kryptonite (but still theoretically free-software-compatible, unlike the CC-*-NC variants, which are explicitly non-commercial but therefore not free-software-compatible either), so I guess it’s a fine choice for that.

                                                                                                                2. 11

                                                                                                                  It feels to me like the FSF overplayed their hand with GPLv3, and it’s led to more aggressive efforts away from the GPL.

                                                                                                                  1. 2

                                                                                                                    Are there any articles from lawyers about what form this lack of clarity takes?

                                                                                                                     Or is this just the old concern about linking, with GPLv3 providing a convenient FUD checkpoint?

                                                                                                                    1. 1

                                                                                                                       I talked to people about it (several years ago, so I’m a bit hazy on the details), so I don’t have anything for you to read up on. Generally speaking, these lawyers are friendly towards open source and copyleft, so I doubt it was just a FUD checkpoint for them.

                                                                                                                       The best I found (though I’m not sure it matches the points I heard) is Allison Randal’s take on the GPLv3 from 12 years ago: http://radar.oreilly.com/2007/05/gplv3-clarity-and-simplicity.html. That one focuses more on the “layperson reading a license” aspect, which shouldn’t worry copyright lawyers too much.

                                                                                                                  1. 2

                                                                                                                     I still remember the first time I was exposed to Redis and how surprising it was that it could only use one CPU. I think we had 4-core servers at the time, and I had to remind everyone that 25% CPU usage meant it was maxed out.

                                                                                                                     But still, Redis on machines of that era (2011) easily saturated our 1 Gbps NICs.

                                                                                                                     Today on AWS I see ElastiCache instances saturate their 10 Gbps interfaces. But it does seem like such a waste that you can provision up to a 96-vCPU ElastiCache Redis instance and leave so much of the CPU unusable.