1. 52

    Over the past few years of my career, I was responsible for over $20M/year in physical infra spend. Colocation, network backbone, etc. And then 2 companies that were 100% cloud with over $20M/year in spend.

    When I was doing the physical infra, my team was managing roughly 75 racks of servers in 4 US datacenters, 2 on each coast, and an N+2 network backbone connecting them together. That roughly $20M/year counts both OpEx and CapEx, but not engineering costs. I haven’t done this in about 3 years, but for 6+ years in a row, I’d model out the physical infra costs vs AWS prices at 3-year reserved pricing. Our infra always came out about 40% cheaper than buying from AWS, for as apples-to-apples a comparison as I could get. Now I would model this with savings plans, and probably bake in some of what I know about the discounts you can get when you’re willing to sign a multi-year commit.
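    For anyone who wants to build a similar model, the skeleton is simple; every number below is an illustrative placeholder (hypothetical rack counts, prices, and instance rates), not a figure from my actual models:

```python
# Back-of-the-envelope comparison of owned-rack TCO vs an equivalent
# AWS footprint at 3-year reserved pricing. All inputs are placeholders.

def annual_colo_cost(racks, capex_per_rack, amortization_years, opex_per_rack):
    """Amortized CapEx plus OpEx per year for a physical footprint."""
    return racks * (capex_per_rack / amortization_years + opex_per_rack)

def annual_aws_cost(instances, reserved_hourly_rate):
    """Equivalent AWS spend at an effective 3-year reserved hourly rate."""
    return instances * reserved_hourly_rate * 24 * 365

colo = annual_colo_cost(racks=75, capex_per_rack=150_000,
                        amortization_years=3, opex_per_rack=80_000)
aws = annual_aws_cost(instances=3_000, reserved_hourly_rate=0.50)

print(f"colo: ${colo:,.0f}/yr  aws: ${aws:,.0f}/yr  "
      f"colo is {1 - colo / aws:.0%} cheaper")
```

    The hard part isn’t the arithmetic, it’s getting honest inputs: power, space, remote hands, refresh cycles, and the discount tier you can actually negotiate.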

    That said, cost is not the only factor. Now bear in mind, my perspective is not 1 server, or 1 instance. It’s single-digit thousands. But here are a few tradeoffs to consider:

    1. Do you have the staff / skillset to manage physical datacenters and a network? In my experience you don’t need a huge team to be successful at this. I think I could do the above $20M/year, 75 rack scale, with 4-8 of the right people. Maybe even less. But you do have to be able to hire and retain those people. We also ended up having 1-2 people who did nothing but vendor management and logistics.

    2. Is your workload predictable? This is a key consideration. If you have a steady or highly predictable workload, owning your own equipment is almost always more cost-effective, even when considering that 4-8 person team you need to operate it at the scale I’ve done it at. But if you need new servers in a hurry, well, you basically can’t get them. It takes 6-8 weeks to get a rack built and then you have to have it shipped, installed, bolted down etc. All this takes scheduling and logistics. So you have to do substantial planning. That said, these days I also regularly run into issues where the big 3 cloud providers don’t have the gear either, and we have to work directly with them for capacity planning. So this problem doesn’t go away completely, once your scale is substantial enough it gets worse again, even with Cloud.

    If your workload is NOT predictable, or you have crazy fast growth, deploying mostly or all cloud can make huge sense. Your tradeoff is that you pay more, but you get a lot of agility for the privilege.

    3. Network costs are absolutely egregious on the cloud. Especially AWS. I’m not talking about a 2x, or 10x, markup. By my last estimate, AWS marks up their egress by roughly 200-300x their costs! This is based on my estimates of what it would take to buy the network transit and routers/switches you’d need to egress a handful of Gbps. I’m sure this is an intentional lock-in strategy on their part. That said, I have heard rumors of quite deep discounts on the network if you spend enough $$$. We’re talking three-digit-million, multi-year commits to get the really good discounts.
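    To show where a 200-300x estimate comes from, here’s the shape of the arithmetic; both unit prices below are assumptions for illustration, not quotes:

```python
# Compare AWS egress pricing to bulk IP transit for a fully used 1 Gbps
# link. Both unit prices are assumptions: AWS's published first tier is
# around $0.09/GB, and large transit commits can run on the order of
# $0.10 per Mbps per month (your numbers will vary).

AWS_EGRESS_PER_GB = 0.09        # USD per GB (assumed first-tier price)
TRANSIT_PER_MBPS_MONTH = 0.10   # USD per Mbps per month (assumed bulk rate)

seconds_per_month = 30 * 24 * 3600
# 1 Gbps = 1000 Mbps = 125 MB/s; convert to GB moved in a month.
gb_per_month = 1_000 / 8 * seconds_per_month / 1_000

aws_cost = gb_per_month * AWS_EGRESS_PER_GB
transit_cost = 1_000 * TRANSIT_PER_MBPS_MONTH

print(f"{gb_per_month:,.0f} GB/month; AWS ${aws_cost:,.0f} vs "
      f"transit ${transit_cost:,.0f}: ~{aws_cost / transit_cost:.0f}x markup")
```

    Routers, switches, and cross-connects add real cost on the transit side, but nowhere near enough to close a gap that size.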

    4. My final point, and a major downside of cloud deployments combined with a Service Ownership / DevOps model, is that you can see your cloud costs grow to insane levels due to simple waste. Many engineering teams just don’t think about the costs. The cloud makes lots of things seem “free” from a friction standpoint. So it’s very, very easy to have a ton of resources running, racking up the bill, and then it’s a lot of work to claw that back. You either need a set of gatekeepers, which I don’t love because that ends up looking like an Ops team, or you have to build a team for cost visibility and attribution.

    On the physical infra side, people are forced to plan, forced to come ask for servers. And when the next set of racks aren’t arriving for 6 weeks, they have to get creative and find ways to squeeze more performance out of their existing applications. This can lead to more efficient use of infra. In the cloud world, just turn up more instances, and move on. The bill doesn’t come until next month.

    Lots of other thoughts in this area, but this got long already.

    As an aside, for my personal projects, I mostly do OVH dedicated servers. Cheap and they work well. Though their management console leaves much to be desired.

    1. 2

      The thing about having no code of conduct being a benefit seems to come up somewhat regularly. I even see it show up on the OpenBSD lists. But this is really just a function of community size. A small enough community can be self-governing with implicit social norms.

      But once it gets large enough, the possibility rises that you’ll have too many bad actors, so you need to start making the norms explicit. This is why you see a code of conduct in FreeBSD, the community is larger.

      1. 7

        Codes of Conduct aren’t exhaustive lists of what is allowed, nor even exhaustive lists of what is not allowed. They provide a bunch of guidelines, but in the end they need to be filled with life through enforcement action that usually goes beyond the scope of what is written down (and that’s where the bickering about CoCs starts: is any given activity covered by one of the forbidden actions or not?), which makes the actual social norms implicit again.

        The main signal a CoC provides is that the community is willing to enforce some kind of standard, which is a useful signal. There are communities that explicitly avoid any kind of enforcement, and there are communities that demonstrate that willingness through means other than CoCs.

        1. 5

          I don’t automatically assume that a community without a CoC is not willing to enforce a minimal standard of decency. If I were to insult a maintainer, a co-contributor or bug-reporter, I wouldn’t be surprised to experience repercussions. Do others assume that because there’s no formal document, that you can just say whatever you want?

          Either way, it’s off-topic.

          1. 4

            Not being exhaustive is actually what is great about Codes of Conduct. One of the interesting things about moderating online communities is that the more specific and defined your rules for participation are, the more room bad actors have to argue with you and cause trouble.

            If the rules for your website are extremely specific, bad actors will try to poke holes in that logic, find loopholes, and generally argue the details of the rules. However, if your rule for participation is simply “don’t be an asshole”, then you have a lot more room as a moderator to deal with bad actors without getting into the weeds about the specifics.

            The Tildes Code of Conduct is really great for moderating an online community, because it’s simple and vague enough for almost everyone to understand, but does not leave any footing for bad actors to try to argue that they didn’t technically break the rules.

            I think Codes of Conduct are great, and honestly, most of the people I encounter who are against them tend to be… not pleasant to collaborate with.

            Regarding bickering about forbidden actions:

            Shut it down. If you are a moderator or maintainer and someone breaks the rules, ban them. If someone causes a stink about it, warn them, and then ban them too if necessary.

            I think online communities, especially large online communities, seem to be afflicted with this idea that people on the Internet have a right to be heard and to participate. That isn’t true. Operators of these communities are not and should not be beholden to anyone. If someone continuously makes the experience worse for others and refuses to do better, ban them and be done with it.

            1. 2

              From a POSIWID perspective, the things I have observed lead me to conclude that the purpose of CoCs (in business, opensource, and other community organisations) is to install additional levers that may only be operated by politically-powerful people, and provide little-to-no protection for the people they claim to protect. I have seen people booted from projects despite admission by the admins that no CoC violation occurred, and I have seen people close ranks around politically-powerful people who remain protected despite violating organisation/project/event CoCs.

            2. 3

              You seem to be equating a code of conduct with a willingness to ban bad actors. I think that’s a false equivalence.

              1. 2

                That was not my point. My point was that the need for a code of conduct is often due to community size. Smaller communities can be more self-policing based on implicit norms. They certainly can and do ban or drive off bad actors.

            1. 20

              TL;DR: usernames weren’t sanitized and could begin with “-”, making them parse as options to the authentication program. Exploiting this, the username “-schallenge:passwd” allowed a silent auth bypass because the passwd backend doesn’t require a challenge.

              Awesome find, great turnaround from Theo.
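              For anyone curious about the shape of the bug, here’s an illustration using Python’s getopt (the real code is C inside OpenBSD’s libc; the option string and invocation below are stand-ins, not the actual interface):

```python
# Why an unsanitized username starting with "-" is dangerous: anything
# that later runs through an option parser will treat it as a flag.
import getopt

def run_auth_helper(username):
    # Hypothetical: the caller naively builds an argv with the username
    # first, then hands it to an option parser that accepts -s and -v.
    argv = [username, "..."]
    opts, args = getopt.getopt(argv, "s:v:")
    return dict(opts), args

# A normal username is just a positional argument:
print(run_auth_helper("alice"))        # ({}, ['alice', '...'])

# But "-schallenge" is consumed as option -s with value "challenge",
# silently switching the authentication style:
print(run_auth_helper("-schallenge"))  # ({'-s': 'challenge'}, ['...'])
```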

              1. 2


                It’s a modern marvel that people end up using web frameworks with automatic user-data parsing and escaping for their websites; if they didn’t, many more places would have these kinds of “game over” scenarios.

                1. 5

                  Usernames in web applications are not easy, nor is there wide awareness of the problems or deployment of solutions.

                  If you’re interested in learning more, I’ve gone on about this at some length.
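                  A few of the classic pitfalls, sketched as a hypothetical validator (the rules and limits below are examples, not a complete or authoritative solution):

```python
# Sketch of username canonicalization and screening. Each step guards
# against a different class of bug.
import unicodedata

def canonicalize(username):
    # Unicode normalization: "é" can be one codepoint or two.
    u = unicodedata.normalize("NFKC", username)
    # Case folding: "Alice" and "alice" shouldn't be distinct accounts.
    u = u.casefold()
    # Drop whitespace users paste in by accident.
    return u.strip()

def is_acceptable(username):
    # Reject names that could be parsed as options or path components.
    if username.startswith(("-", ".")) or "/" in username:
        return False
    return 1 <= len(username) <= 32

# Two visually identical inputs collapse to one canonical form:
assert canonicalize("Ame\u0301lie") == canonicalize("Am\u00e9lie")
assert not is_acceptable("-schallenge")
```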

                2. 1

                  If memory serves right, there was an old login bug (circa ’99) that was the same sort of thing:


                  Edit: https://lobste.rs/s/bufolq/authentication_vulnerabilities#c_jt9ckw

                  Too slow I guess :)

                  1. 1

                    Is this specific to Openwall users, or is it applicable to OpenBSD in general?
                    From the title it looks like an authentication vulnerability in the OpenBSD core OS.

                    1. 1

                      This is OpenBSD in general.

                    1. 6

                      I really liked this. Particularly the points about maintenance being just as important as building something new. It’s nice to see their philosophy articulated and how the Neovim team has put it into action. I loved the call-out about fixing an issue being an O(1) cost while the impact is O(N*M) across all the users it reaches.

                      Very much looking forward to seeing their roadmap realized.

                      1. 1

                        I’ve been using this for the last several months. Switched over from a homeshick setup. I’m liking it now.

                        My repo with a README showing my simple workflow: https://github.com/kelp/dotfiles

                        1. 23

                          I think Josh addresses a good point here: systemd provides features that distributions want, but that other init systems actively call non-features. That’s a classic culture clash, and it shows in the systemd debates - people hate it or love it (FWIW, I love it). I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                          Still, it’s always important to have a way out of a component. But the problem here seems to be that the scope of an init system is ill-defined and there are fundamentally different ideas about where the Linux world should move. systemd moves away from the “kernel with a rather free userspace on top” model; others don’t agree.

                          1. 17

                            Since systemd is Linux-only, no one who wants to be portable to, say, BSD (which I think includes a lot of people) can depend on its features anyway.

                            1. 12

                              Which is why I wrote “Linux world” and not “Unix world”.

                              systemd has a vision for Linux only and I’m okay with that. It’s culture clashing, I agree.

                              1. 6

                                What I find so confusing - and please know this comes from a “BSD guy” and a place of admitted ignorance - is that it seems obvious the natural conclusion of these greater processes must be that “Linux” is eventually something closer to a complete operating system (not a bazaar of GNU/Linux distributions). This seems to be explicitly the point.

                                Not only am I making no value judgement on that outcome, but I already live in that world of coherent design and personally prefer it. I just find it baffling to watch distributions marching themselves towards it.

                                1. 6

                                  But it does create a monoculture. What if you want to run service x on BSD or Redox or Haiku? A lot of Linux tools can be compiled on those operating systems with a little work, sometimes for free. If we start seeing hard dependencies on systemd, you’re also hurting new-OS development. Your service won’t be able to run in an Alpine docker container either, or on distributions like Void Linux, or default Gentoo (although Gentoo does have a systemd option; it too is in the mess of supporting both init systems).

                                  1. 7

                                    We’ve had wildly divergent Unix and Unix-like systems for years. Haiku and Mac OS have no native X11. BSDs and System V have different init systems, OpenBSD has extended libc for security reasons. Many System V based OSes (looking at you, AIX) take POSIX to malicious compliance levels. What do you think ./configure is supposed to do if not but cope with this reality?

                                2. 2

                                  Has anyone considered or proposed something like systemd’s feature set but portable to more than just Linux? Are BSD distros content with SysV-style init?

                                  1. 11

                                    A couple of pedantic nits. BSDs aren’t distros. They are each distinct operating systems that share a common lineage. Some code and ideas are shared back and forth, but the big 3, FreeBSD, NetBSD and OpenBSD, diverged in the 90s. 1BSD was released in 1978. FreeBSD and NetBSD forked from 386BSD in 1993. OpenBSD from NetBSD in 1995. So that’s about 15 years, give or take, of BSD before the modern BSDs forked.

                                    Since then there has been 26 years of separate evolution.

                                    The BSDs also use BSD init, so it’s different from SysV-style. There is a brief overview here: https://en.m.wikipedia.org/wiki/Init#Research_Unix-style/BSD-style

                                    1. 2

                                      I think the answer to that is yes and no. Maybe the closest would be (Open)Solaris SMF. Or maybe GNU Shepherd or runit/daemontools.

                                      But IMNSHO there are no good arguments for the sprawl/feature creep of systemd - and people haven’t tried to copy it, because it’s flawed.

                                  2. 6

                                    It’s true that systemd is comparatively featureful, and I’ll extend your notion of shipping a software suite by justifying some of its expansion into other aspects of system management in terms of it unifying a number of different concerns that are pretty coupled in practice.

                                    But, and because of how this topic often goes, I feel compelled to provide the disclaimer that I mostly find systemd just fine to use on a daily basis: as I see it, the problem, though, isn’t that it moves away from the “free userspace” model, but that its expansion into other areas seems governed more by political than by technical concerns, and with that comes the problem that there’s an incentive to add extra friction to having a way out. I understand that there’s a lot of spurious enmity directed at Poettering, but I think the blatant contempt he’s shown towards maintaining conventions when there’s no cost in doing so or even just sneering at simple bug reports is good evidence that there’s a sort of embattled conqueror’s mindset underlying the project at its highest levels. systemd the software is mostly fine, but the ideological trajectory guiding it really worries me.

                                    1. 1

                                      I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                      What do you mean here? Bullying all distro maintainers until they are forced to set up your software as default, up to the point of provoking the suicide of people who don’t want to? That’s quite a heavy sarcasm you are using here.

                                      1. 12

                                        up to the point of provoking the suicide of people who don’t want to


                                        1. 25

                                          How was anyone bullied into running systemd? For Arch Linux this meant we no longer had to maintain initscripts and could rely on systemd service files, which are a lot nicer. In the end it saved us work, and that’s exactly what systemd tries to be: a toolkit for initscripts and related system-critical services, now also unifying Linux distros.

                                          1. 0

                                            huh? Red Hat and Poettering strongarmed distribution after distribution and stuffed the debian developer ballots. This is all a matter of the public record.

                                            1. 10

                                              stuffed the debian developer ballots

                                              Link? This is the first time I am hearing about it.

                                              1. 5

                                                I’m also confused. I followed the Debian process, and found it very thorough and good. The documents coming out of it are still a great reference.

                                          2. 2

                                            I interpreted it as having fewer edges where you don’t have control. Similar situations happen with omnibus packages that ship all dependencies and the idea of Docker/containers. It makes it more monolithic, but easier to not have to integrate with every logging system or mail system.

                                            If your philosophy of Linux is Legos, you probably feel limited by this. If your philosophy is a platform, then this probably frees you. If the constraints are doable, they often prevent subtle mistakes.

                                            1. 2

                                              I don’t think skade intended to be sarcastic or combative. I personally have some gripes with systemd, but I’m curious about that quote as well.

                                              I read the quote as being sympathetic towards a more unified init system. Linux sometimes suffers from having too many options (a reason I like BSD). But I’m not sure that was the point being made.

                                              Edit: grammar

                                              1. 5

                                                I value pieces that are intended to work well together and come from the same team, even if they are separate parts. systemd provides that. systemd has a vision and is also very active in making it happen. I highly respect that.

                                                I also have gripes with systemd, but in general I like to use it. And as long as no other project comes along that aims to move the world away from systemd by being better, and by being better at convincing people, I’ll stick with it.

                                          1. 2

                                            I love this question.

                                            I iterated on this for quite a while, but over the last several years I’ve settled on this:

                                            • I hardly have a system.
                                            • I write TODO, the date and a square box for a TODO list
                                            • I write a descriptive heading, and use some bullet points for writing down some ideas.

                                            The leather notebook has quite the patina now from carrying it around for the last few years. And I’m a big fan of the Doane paper grid-lines notepads. Higher quality paper than Field Notes, but the same form factor. And I enjoy the grid-lines pattern.

                                            I also keep some of the Doane paper writing pads on my desk at work and home for random disposable notes, like a daily TODO, and working out ideas.

                                            1. 19

                                              I’ve started to appreciate this perspective recently. It’s easy to get carried away with always digging deeper to learn the next lowest level. This inevitably leads to finding new, ugly problems with each new level. If you’re anything like me, you’ll constantly feel the urge to rewrite the whole world. It’s not good enough until I develop my own CPU/OS/programming language/UI toolkit/computer environment thing. This is also the problem with learning things like Lisp and Haskell (which are “low-level” in a theoretical sense).

                                              At some point, you have to accept that everything is flawed, and if you’re going to build something that real people will use, you have to pick a stack, accept the problems, and start coding. Perfect is the enemy of good after all.

                                              But there is still value in learning low-level languages, and the author may have gone too far in his criticisms. In high school, I learned C and decided the whole world needed to be rewritten in C. I wrote parts of games, kernels, compilers, and interpreters, and learned a lot. My projects could have been more focused. I could have chosen more “pragmatic” languages, and maybe built software that actually got used by a real end-user. Still, there were a few lessons I learned.

                                              First, C taught me how little library support you actually need to build usable software. To make an extreme comparison, this is totally at odds with how most JavaScript developers work. Most of my C projects required nothing but the stdlib, and maybe a library for drawing stuff to the screen. Sure, this meant I ended up writing a lot of utility functions myself, but it can be pretty freeing to realize how few lines of code you need to build highly interactive and useful applications. C tends to dissuade crazy abstractions (and thus external libraries) because of the limits and lack of safety in the language. This forces you to write everything yourself and to understand more of the system than you would have had you used an external library.

                                              The corollary is recognizing how difficult things were in “the old days” when components were less composable and unsafe languages abounded. We have it good. Sure, software still sucks, and things are worse now in certain ways because of multithreading and distributed systems, but at least writing code that does crazy string manipulation is easy now [1].

                                              The other value of learning low-level programming is that it does come in handy 1% of the time when the abstraction leaks and something breaks at a lower level. In such a situation, rather than being petrified and reverting to jiggling the system and hoping it’ll start working again, you roll up your sleeves, crack out gdb and get to work debugging the segfault that’s happening in the JVM because somebody probably missed an edge case in the FFI call to an external library. It’s handy to be able to do this[2].

                                              I’ll continue to use the shorthand of “knowing your computer all the way to the bottom” as meaning understanding to the C/ASM level, but I’ve definitely become more cognizant of the problems with C and focusing too much on optimization. I love optimization and low-level coding, but most problems will suffer from the extra complexity of writing the whole system in C/C++. Code maintainability and simplicity are more important.

                                              [1] Converting almost any old system into a modern language would massively reduce the total SLOC and simplify it considerably. The problems we have now are largely the fault of us cranking up the overall complexity. Some of the complexity is essential, but most is accidental.

                                              [2] As a bonus, you look like a total wizard to everybody else too :)

                                              1. 13

                                                The other value of learning low-level programming is that it does come in handy 1% of the time when the abstraction leaks and something breaks at a lower level. In such a situation, rather than being petrified and reverting to jiggling the system and hoping it’ll start working again, you roll up your sleeves, crack out gdb and get to work debugging the segfault that’s happening in the JVM because somebody probably missed an edge case in the FFI call to an external library. It’s handy to be able to do this[2].

                                                I guess my perspective on this is a bit warped from leading SRE and Performance Engineering type teams for so long. This 1% is our 80%. So it’s often about looking through the abstractions and understanding, underneath, how something is failing or inefficient. In today’s cloud world, this can translate directly into real dollars that dynamically fluctuate depending on the efficiency and usage of the software we’re running.

                                                It seems like most of the perspectives here, and in the main article are in the context of writing application code, or business logic in situations that are not performance critical.

                                                1. 2

                                                  Yeah, I generally consider stuff like ops and SRE to be at the OS/kernel level anyway. I’d guess you generally are less concerned about business logic, and more concerned with common request behaviors and how they impact performance. But I think (to the original author’s point), that understanding how filesystems perform or how the networking stack operates in different conditions is essential for this type of work anyway. SREs are actually a group that would have a real reason for experimenting with different process scheduling algorithms! :P

                                                  Digging lower-level for an SRE would probably include doing things like learning how the kernel driver or firmware for an SSD runs or even how the hardware works, which probably has less of a return in value than getting a broader understanding of different kernel facilities.

                                                2. 3

                                                  how few lines of code you need to build highly interactive and useful applications

                                                  Sure, if you ignore necessary complexities like internationalization and accessibility. Remember Text Editing Hates You Too from last week? The days when an acceptable text input routine could fit alongside a BASIC interpreter in a 12K ROM (a reference to my own Apple II roots) are long gone. The same applies to other UI components.

                                                  1. 2


                                                    1. 0

                                                      C taught me how little library support you actually need to build usable software. To make an extreme comparison, this is totally at odds with how most JavaScript developers work. Most of my C projects required nothing but the stdlib,

                                                      Most people who make this claim about JavaScript don’t appreciate that there is no stdlib. It’s just the language - very basic until recently, and still pretty basic - which amounts to a couple of data structures and a hodgepodge of helper functions. The closest thing there has been to a stdlib is probably the third-party library lodash. Even that’s just some extra helper functions. People didn’t spend countless hours implementing or using a bunch of libraries out of ignorance; they did it because there was no stdlib!

                                                      1. 6

                                                        Most people who make this claim about JavaScript don’t appreciate that there is no stdlib

                                                        Um. Depending on the minimum supported browser, JavaScript has at least string manipulation methods (including regex, splitting, joining, etc.), garbage collection, hash tables, sets, promises, BigNums, Unicode strings, exception handling facilities, generators, prototypal OO functions, and DOM manipulation functions. Every web browser supports drawing things with (at least one of) the canvas API, SVG, and CSS styling. You get an entire UI toolkit for free.

                                                        C has precisely zero of those. You want hash tables? You have to implement your own hashing function and build out a hash table data structure from there. How about an object-oriented system? You’ll have to define a convention and implement the whole system from scratch (including things like class hierarchy management and vtable indirection if you want polymorphic behavior).

                                                        In JavaScript, a “string” is a class with built-in methods containing a bounded array of characters. In C, a “string” is a series of contiguous bytes terminated by a zero. There’s no concept of bounded arrays in C. Everything is pointers and pointer offsets. You want arrays with bounds checks? Gotta implement those. Linked lists? Build ’em from the recursive definition.

                                                        Literally the most string-manipulation-ish behavior I can think of off the top of my head is the strtok function defined in string.h. It performs string tokenization by scanning a string until it hits a delimiter, returning a pointer to the beginning of the token and inserting a null terminator at the end of it, keeping track of where it left off in static memory. It does this to avoid an allocation, since memory management is done manually in C. Clearly it’s not threadsafe.

                                                        That’s about the highest-level thing string.h can do. It also implements things like, oh, memcpy, which is literally a function that moves raw bytes of memory around.
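                                                        To make that hidden state concrete, here’s a rough Python imitation of strtok’s interface; the point is that the resume position lives in one shared variable, which is exactly why it isn’t thread-safe:

```python
# Rough Python imitation of strtok's interface, showing why it isn't
# thread-safe: the resume position lives in one shared variable.
_saved = ""  # stands in for strtok's static pointer in C

def strtok(s, delims):
    """First call: pass the string. Subsequent calls: pass None."""
    global _saved
    if s is not None:
        _saved = s
    _saved = _saved.lstrip(delims)      # skip leading delimiters
    if not _saved:
        return None
    # Find the end of the token, "terminate" it, remember the rest.
    i = 0
    while i < len(_saved) and _saved[i] not in delims:
        i += 1
    token, _saved = _saved[:i], _saved[i:]
    return token

tokens = []
t = strtok("one two  three", " ")
while t is not None:
    tokens.append(t)
    t = strtok(None, " ")
print(tokens)   # ['one', 'two', 'three']
```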

                                                        Maybe JavaScript’s stdlib isn’t as extensive as, say Python’s, but it exists, and it is not small (and to my original point, is far nicer for getting real work done more quickly than what was possible in the past). But every external library that’s included in a JS application gets sent to every client every time they request the page[1]. I’m not saying that external libraries should never be used with JS, but there’s a multiplicative cost with JS that ought to encourage more compact code than what is found on most websites.

                                                        [1] Yes, sans caching, but centralizing every JS file to a CDN has its own set of issues.

                                                        1. 3

                                                          What is a stdlib if not a library of datatypes and functions that comes standard with an implementation of a language?

                                                          1. 2

                                                            What do you call String and RegExp and Math, etc in Javascript? These are defined by the language spec but basically comparable to C’s stdlib.

                                                            And of course, in the most likely place to use Javascript, in the browser, you have a pretty extensive “DOM” api too (it does more than just dom though!).

                                                        1. 3

                                                          This is so true. As a rule, skills don’t transfer. I am more of a compiler geek than an OS geek, but I openly say to anybody who would listen: learn the compiler if and only if you want to learn the compiler. Do not expect learning the compiler to improve your skills in any other kinds of programming. It is not privileged.

                                                          1. 11

                                                            Slight counter point. I’ve watched the best software engineer on my team explain to others what the Go compiler is doing with a particular piece of code that makes it behave and perform in a certain way. This kind of knowledge and explanation led to a better implementation for us. I’m not sure he’d be able to offer the solutions he does without some knowledge of what the compiler is doing. This is in the context of code in a messaging pipeline that has to be highly reliable and highly efficient. Inefficient implementations can make it so our product is just not economically feasible.

                                                            1. 2

                                                              When I see a piece of code, I often try to reason about how it has to be implemented in the compiler and what it has to do at runtime to understand its properties. Going into the compiler is useful this way. For example, if I want to know how big I can expect a Java class with a super class to be, I know that it can’t do cross-class-hierarchy layout optimization. I know this because having the layout change based on the subclass would make virtual calls tricky to implement.

                                                            2. 3

                                                              There is one skillset I think that learning a compiler would help you with: how to approach a large and foreign codebase. You don’t have to learn a compiler to practice it, but I think learning how to approach unknown codebases is a cross-cutting skill.

                                                              Of course, you can also learn that skill by reading the source to your web/GUI/Game/App framework of choice, assuming it’s available. I do think the general idea of building something that you usually only ever consume is a good notion.

                                                              1. 3

                                                                Good example. My main takeaways from hacking compilers were parsing and pipelining. Parsing should still be automated where possible with parser generators. That said, seeing the effect clean vs messy grammars had was enlightening in a way that had me eyeballing that in future decisions about what data formats to use. LANGSEC showed up later doing an even better job of illustrating that with Chomsky’s model, plus showing how damaging bad grammars can be.

                                                                Pipelining was an abstract concept that showed up again in parallelization, UNIX pipes, and services. It was useful. You don’t need to study compilers to learn it, though. Like in the OP, there’s already a Wikipedia article to give them instead.

                                                                1. 3

                                                                  Grammars and parsing are orthogonal to compilers. Everyone who ever deals with file formats must learn about that because people who don’t know tend to produce horrible grammars and even worse parsers that only work correctly for a tiny subset of cases.

                                                                  Still, one can learn it without ever hearing about the things compilers do with ASTs.

                                                                  1. 1

                                                                    I agree. Compiler books and articles were just my only exposure to them at the time. The modern web having articles on just about everything means it’s easier than ever to narrow things down to just the right topic.

                                                                    1. 2

                                                                      Yeah, and then a number of compiler books don’t really discuss the implications of grammar design either, but get to subjects irrelevant for most readers right away. Like, nobody is really going to write their own code generator anymore now that all mainstream compilers are modular (and if they have a good reason to, they are not or should not be reading introductory books).

                                                                1. 15

                                                                  Apple won’t ship anything that’s licensed under GPL v3 on OS X. Now, why is that?

                                                                  There are two big changes in GPL v3. The first is that it explicitly prohibits patent lawsuits against people for actually using the GPL-licensed software you ship. The second is that it carefully prevents TiVoization, locking down hardware so that people can’t actually run the software they want.

                                                                  So, which of those things are they planning for OS X, eh?

                                                                  Copyright lawyers from multiple organizations that I’ve spoken to simply aren’t too happy with the GPLv3 because to them it lacks clarity. It took quite a while for GPLv2 to be acceptable in any place where lawyers have a veto because of its unusual construction, and GPLv3 added more of that, in language that doesn’t make it easy to interpret (apparently, I’m not a lawyer).

                                                                  1. 6

                                                                    I work at a large company and the guidelines from above are that we should avoid GPL licensed code at all cost. If we cannot avoid it, we need to get permission and isolate it as well as possible from the rest of the source code. This is done not because we want to sue our customers or begin with TiVoization, but simply to guard ourselves against lawsuits and being forced to release sensitive parts of our code.

                                                                    1. 4

                                                                      That’s the generic “careful with GPL” policy. There are companies that are fine with GPLv2 specifically (for the most part) but aren’t fine with GPLv3 because they consider its potential consequences less clear.

                                                                      1. 3

                                                                        Which is why I now use AGPLv3 for everything I personally write. Fuck people taking and taking and not giving anything back. I feel like we’ve lost our open source way. I referenced this very article a few years back when I wrote this:


                                                                        1. 1

                                                                          This is counterintuitive.

                                                                          Fewer people willing or able to even consider using your software instantly means less potential for submissions to fix bugs or add features.

                                                                          1. 1

                                                                            It depends on your priorities. Do you want more users or do you want your software to be free?

                                                                            1. 1

                                                                              You seem to want more contributions, which is why I commented.

                                                                        2. 3

                                                                          The company I work for has the same policy.

                                                                          1. 3

                                                                            Yep. Policies like your employer’s are the main reason that I carefully choose licenses these days. I want to exclude as many corporations as possible from using the code without disqualifying it from being Free Software. I think WTFPL is the best widely-used license for this purpose; does your employer’s policy allow WTFPL?

                                                                            1. 2

                                                                              One of my employers explicitly put WTFPL on the blacklist. Apparently it’s important to have the warranty disclaimer somewhere, which it lacks. Consider the ISC license (https://opensource.org/licenses/isc) instead, which is short and to the point, yet ticks all the boxes that seem to be important to lawyers.

                                                                              1. 1

                                                                                The ISC license is a fine license indeed, but if you re-read my original comment, I am looking for licenses which are not employer-friendly. Indeed, I had considered the ISC license, but found that too many corporations would be willing to use ISC-licensed code.

                                                                                1. 1

                                                                                  Ah, right. I misread, I’m sorry.

                                                                                  Yes, WTFPL is corporate kryptonite (but still theoretically compatible, unlike the CC-*-NC variants, which are explicitly non-corporate but therefore not free-software compatible either), so I guess it’s a fine choice for that.

                                                                          2. 11

                                                                            It feels to me like the FSF overplayed their hand with GPLv3, and it’s led to more aggressive efforts away from the GPL.

                                                                            1. 2

                                                                              Are there any articles from lawyers about what form this lack of clarity takes?

                                                                              Or is this just the old concern about linking and the GPLv3 has provided a convenient FUD checkpoint?

                                                                              1. 1

                                                                                I talked to people (several years ago, so a bit hazy on the details, too), so I don’t have anything to read up on. Generally speaking these lawyers are friendly towards open source and copyleft, so I doubt it was just a FUD checkpoint for them.

                                                                                The best I found (but I’m not sure it matches the points that I heard) is Allison Randal’s take on the GPLv3 from 12 years ago: http://radar.oreilly.com/2007/05/gplv3-clarity-and-simplicity.html. That one focuses more on the “laypersons reading a license” aspect that shouldn’t worry copyright lawyers too much.

                                                                            1. 2

                                                                              I still remember the first time I was exposed to redis and how surprising it was that it could only use one CPU. I think we had 4-core servers at the time, and I had to remind everyone that 25% CPU usage was maxed out.

                                                                              But still, redis on machines of that era (2011) easily saturated our 1gbps nics.

                                                                              Today on AWS I see elasticache instances saturate their 10gbps interfaces. But it does seem like such a waste that you can provision up to a 96 vCPU instance of elasticache redis and leave so much CPU unusable.

                                                                              1. 1

                                                                                At Work:

                                                                                • macOS
                                                                                • NeoVim
                                                                                • fish shell
                                                                                • Firefox
                                                                                • 1Password
                                                                                • Soulver

                                                                                At Home:

                                                                                • OpenBSD / Arch
                                                                                • dwm, dmenu, slstatus / i3, dmenu, i3status-rs
                                                                                • st / alacritty
                                                                                • NeoVim
                                                                                • Tmux
                                                                                • Firefox
                                                                                • 1Password X
                                                                                1. 12

                                                                                  I have a 2018 MateBook X Pro that I put Arch on and I love it. Much prefer it to the 2017 MacBook Pro 13 that I use at work. Keyboard feels great, screen is excellent. USB-C port feels a bit tight. Build quality is maybe just a little bit lower than Apple’s. But not having Apple’s terrible recent keyboards is so nice.

                                                                                  I wrote a detailed setup guide here: https://github.com/kelp/arch-matebook-x-pro

                                                                                  Everything works except the fingerprint reader. I did have to do some weird hackery to get sound working properly, but it’s documented above.

                                                                                  I’ve also tried getting OpenBSD running on it, but get an immediate reboot after the boot loader, as soon as the kernel loads. Not sure how to even start debugging that…

                                                                                  1. 5

                                                                                    Company: Segment

                                                                                    Company site: https://segment.com/

                                                                                    Position(s): Many things, but I’m specifically hiring Software Engineers for the SRE and Tooling teams. https://segment.com/jobs/922851/

                                                                                    Location: San Francisco and Vancouver are primary engineering offices, but open to remote, with some limits. Just ask.

                                                                                    Description: Segment provides infrastructure for first party customer data. The tooling and SRE teams write software, mostly in Go to ensure our engineering team is productive and our systems are reliable. The company is growing quickly with interesting scale challenges. Check out our engineering blog: https://segment.com/blog/categories/engineering/

                                                                                    Contact: tcole@segment.com

                                                                                    1. 2

                                                                                      I would like to see salary distributions not only split out by country but also by “type of software development”. There’s a huge difference between web dev - Django/RoR/ETL sort of stuff - and, say, embedded software development, or HPC, or something.

                                                                                      At least in the UK, the salaries appear to be massively different… but I don’t know if my experience generalizes across the whole country (e.g. is it really a London versus elsewhere thing? And there happen to be many more webby startups in London?); let alone across other countries.

                                                                                      1. 2

                                                                                        I can’t cite any real data, but I think for better or for worse, engineers writing backend services for web things are going to demand better compensation than say an embedded engineer. The margins on SaaS are much higher than hardware / embedded, and there is a ton of demand.

                                                                                        Of course I’m talking FAANG, or companies trying to recruit from FAANG, vs say embedded engineering at Cisco or Juniper.

                                                                                        Glassdoor does have data on this.

                                                                                        1. 2

                                                                                          You seem to be right: at least, that fits with my experience. It surprised me though: I know demand for webby stuff is high but I thought supply would be even higher. When I worked in hardware/embedded/HPC-type things, it was really hard to find good candidates to hire. We’d interview loads of people before we offered anyone a job. Now that I work on web shit, there seems to be a huge pool of candidates to choose from.

                                                                                      1. 5

                                                                                        I think the issue is that SV is driving a certain level of pay. But as remote work becomes more and more accepted companies outside of SV are having to compete at wages that are close to SV prices. I live in a medium sized urban American city and our tech wages used to be some of the lowest in the country. Like 40-50k a year. Due to the acceptance of remote work I was able to force a local company to pay me a wage in the mid six figures because I could point to remote wages and say I can get a remote job making x living here why would I work for you for less? So now I’m in the top 0.5% of my states wages because I used the remote market to drive a much higher wage.

                                                                                        1. 2

                                                                                          I’m a little surprised that the Silicon Valley companies are offering the same or similar (Is this what you’re implying?) wages for remote work as on-site: I would expect them to offer a lower wage on the basis that you don’t need to cover the cost of living in Silicon Valley (while still offering a little more than any company local to you).

                                                                                          1. 2

                                                                                            The wage they offer you is based on what you can convince them you are worth to someone. If you’re a senior engineer and want to work remote, get multiple offers, find a company willing to pay Bay Area compensation for remote work. The other companies then have to compete with that.

                                                                                            I know of several remote engineers who have that arrangement.

                                                                                            1. 1

                                                                                              I’ve seen a few friends get offers in the mid to upper 100k’s from SV companies. I was first offered below 100k at my current job, and the pressure of those remote salaries allowed me to push for a significantly higher rate than they would have offered me otherwise.

                                                                                          1. 17

                                                                                            I’ve been a hiring manager / engineering director in San Francisco for a number of years, and I’ve talked a lot with people whose job it is to set engineering compensation.

                                                                                            This is mostly me speculating and reasoning it out, so, sorry, I won’t be citing any sources.

                                                                                            Much of what you’re seeing is really the top companies, especially FAANG and those trying to compete with them for engineers. This is especially true in the Bay Area where there are a lot of engineers, but still a limited number that live there or are willing to relocate. The housing costs are a big deterrent to relocating. Everyone is hiring.

                                                                                            There is another tier of companies that doesn’t pay as well as these.

                                                                                            In those top companies, they generally have to recruit engineers from other companies, and candidates often have multiple offers. They index their compensation against each other. So it’s a bit of an arms race. If another company is willing to pay your engineers more, you need to match it or risk losing them.

                                                                                            Generally it’s a labor shortage, combined with easy Venture Capital, but also a set of businesses that have or have potential for great profit and revenue with high margins.

                                                                                            One thing we’re seeing is a push to hire remote engineers, elsewhere in the US or even outside of the US. I think this is because we’ve finally run compensation up to a level that is hard for businesses to bear. And everyone uses video conferencing and things like Slack now.

                                                                                            Generally, more senior engineers in the US can demand Bay Area compensation even when remote. More junior engineers are going to take a haircut on Bay Area compensation.

                                                                                            1. 16

                                                                                              I’m talking with a friend of mine right now who is moving from Google in SV at ~$250k USD to Google Montreal at ~$230k CAD (~$170k USD). (Total comp with stock/bonus)

                                                                                              While $250k USD in SV is great, you’re still living in an overpriced rental, you just feel mid-upper class. But $230k CAD in Montreal is completely different. You’re basically “the 1%”: you can buy whatever place you want, you can go to the fanciest restaurants often, etc.

                                                                                              My friend is going from a rather miserable place he hates to his home city, and while he took a considerable pay cut, he’s gaining a much better QOL and Google just saved $60-70k USD on a single employee.

                                                                                              The real question ends up being where you want to end up in the long run. I was in a similar boat, making ~$170k CAD ($130k USD) (base pay, no bonus/comp) working remote from Toronto, which was awesome; a comparable position locally would have got me around $120k CAD. I had an amazing downtown penthouse condo, and my QOL was generally great. But I wasn’t going to live in Toronto forever, my partner lives in the US, and we’ll likely end up in the US for at least the next 5-30 years. My $170k CAD salary was a huge hindrance if we were wanting to buy a place in one of our choice cities: Seattle, Boston, DC, NYC, etc., all of which are more expensive than Toronto. I’m in Seattle now, and while I make a lot more, my QOL dropped a fair bit: I now live on the 2nd floor of a ~6 story rental building, paying the same percentage of my income towards rent as I was for my Toronto penthouse. (So really a lot more than I was paying for my Toronto place.) But if I were to return to Toronto I’d just straight up have more cash.

                                                                                              The tradeoffs are interesting.

                                                                                              1. 1

                                                                                                My friend is going from a rather miserable place he hates to his home city, and while he took a considerable pay cut, he’s gaining a much better QOL and Google just saved $60-70k USD on a single employee.

                                                                                                Glad to hear that your friend is choosing their own happiness over money, even if it worked out with QOL. Also glad to hear that someone aside from myself detested SV—most people there make fun of everywhere that’s not California. I’m extremely happy I don’t live there anymore.

                                                                                                I’m in Seattle now, and while I make a lot more, my QOL dropped a fair bit

                                                                                                Interesting to hear that Seattle is more expensive than Toronto! That goes against what I assumed. I’m actually in the inverse boat—due to circumstances surrounding my partner we’ll likely end up moving from Seattle to Canada in the next few years. I love the PNW, but Vancouver seems even worse than Seattle at this point QOL-wise.

                                                                                                1. 2

                                                                                                  Also glad to hear that someone aside from myself detested SV—most people there make fun of everywhere that’s not California. I’m extremely happy I don’t live there anymore.

                                                                                                  I didn’t realize how little I liked the Bay Area until I moved to Toronto. Obviously, there’s a lot of California that I miss, but it’s really nice to not hate my working life any longer.