1. 2

    About 10 years ago, I remapped Caps Lock to Ctrl on my keyboard. I like that position so much better. Now I can’t use anyone else’s computer

    Oh yeah? I type Dvorak. Using somebody else’s computer turns me into a hunt-and-pecker, and if I ever do find a typing rhythm, I end up slipping into Dvorak. When the Greek gods punish me eternally, I’ll have a QWERTY keyboard on my computer, but I’ll have to give verbal instructions to someone using a Dvorak layout (but with QWERTY keycaps) to walk them through typing a series of shell commands, and even Prometheus is gonna go, “Whoa, what’d you do to piss the gods off for that?”

    1. 18

      What a curious article. Let’s start with the style, such as calling some of the (perceived) advantages of a monorepo a “lie”. Welp, guess I’m a liar 🤷. Good way to have a conversation, buddy. Based on this article I’d say that working at Lyft will be as much fun as working at Uber.

      Anyway, we take a deep breath and continue, and it seems that everything is just handwaved away.

      Our organisation has about 25 Go applications, supported by about 20 common dependency packages. For example, we have packages log, database, cache, etc. Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps. It’s doable, but a lot of work. I expect that we’ll have 50 Go applications before the year is out.

      Monorepos exist exactly to solve problems like this. These problems are real, and can’t just be handwaved away. Yes, I can write (and have written) tools to deal with this to some extent, but it’s hard to get this right, and in the end I’ve still got 25 PRs to juggle. The author is correct that tooling for monorepos also needs to be written, but it seems to me that that tooling will be a lot simpler and easier to maintain (Go already does good caching of builds and tests out of the box, so we just have to deal with deploys). In particular, I find it very difficult to maintain any sense of “overview” of stuff because everything is scattered over 25 PRs.
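
      To give an idea of what that tooling looks like, here’s a minimal sketch of the kind of update script I mean (the repo names and module path are made up): it bumps one shared package across every app repo, and still leaves you with one PR per repo to review and merge.

          // bumpdep: bump a shared dependency across many polyrepo checkouts.
          // Repo directories and the module path below are hypothetical.
          package main

          import (
              "fmt"
              "os"
              "os/exec"
          )

          var repos = []string{"app-billing", "app-search", "app-orders"} // …and 22 more

          func run(dir, name string, args ...string) {
              cmd := exec.Command(name, args...)
              cmd.Dir = dir
              cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
              if err := cmd.Run(); err != nil {
                  fmt.Fprintf(os.Stderr, "%s: %s: %v\n", dir, name, err)
                  os.Exit(1)
              }
          }

          func main() {
              version := "v1.5.0" // new version of the shared package
              for _, repo := range repos {
                  run(repo, "git", "checkout", "-b", "bump-database-"+version)
                  run(repo, "go", "get", "example.com/org/pkg/database@"+version)
                  run(repo, "go", "mod", "tidy")
                  run(repo, "git", "commit", "-am", "Bump pkg/database to "+version)
                  // push and open a PR here; that’s still 25 PRs to shepherd through review
              }
          }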

      Note that the total size of our codebase isn’t even that large. It’s just distributed over dozens of repos.

      It’s still a difficult problem, and there is no “one size fits all” solution. If our organisation still had just one product in Go (as when we started out three years ago) then the current polyrepo approach would continue to suffice. It still worked mostly okay when we expanded to two and three products. But now that we’ve got five products (and probably more on the way in the future) it’s getting harder and harder to manage things. I can write increasingly advanced tooling, but that’s not really something I’m looking forward to.

      I’m not sure how to solve it yet; for us, I think the best solution will be to consolidate our 20 dependency packages into a single one, and to group each product’s services into a repo of their own, so we’ll end up with six repos.

      Either way, the problems are real, and people who look towards monorepos aren’t all stupid or liars.

      1. 4

        I would imagine that if all you use is Go, and not much else, then you are in the monorepo “sweet spot” (especially if your repo size isn’t enormous). From what I understand, Go was more or less designed around the Google-internal monorepo workflow, at least until Go 1.10/1.11 or so (some six years after Go 1.0).

        It makes me wonder…

        • Are there other languages that seem to make monorepo style repos easier?
        • Are monorepos harder/worse if you have many apps written in multiple disparate languages?
        1. 7

          The main issue with monorepos (imo) is that lots of existing tools assume you are not using them (e.g. GitHub webhooks, CI providers, VCS support for partial worktrees, etc.). Not an issue at Google scale, where such tools are managed (or built) in-house.

          1. 3

            This point isn’t made enough in the monorepo debate. The cost of a monorepo isn’t just the size of the checkout, it’s also all of the tooling you lose by using something non-standard. TFA mentioned some of it, but even things like git log become problematic.

            1. 2

              Is there a middle ground that scopes the tooling better? What I mean is: keep your web app and related backend services in their monorepo, assuming they aren’t built on drastically different platforms and you desire standardisation and alignment. Then keep your mobile apps in separate repos, unless you are using some cross-platform framework which permits a mobile monorepo. You get the benefits of the monorepo for what is possibly a growing set of services that need to be refactored together, while not cluttering git log et al with completely unrelated changes.

              1. 2

                Sort of. What really matters is whether you end up with a set of tools that work effectively. For small organizations, that means polyrepos, since you don’t often have to deal with cross-cutting concerns and you don’t want to build / self-host tools.

                Once you grow to be a large organization, you start frequently making changes which require release coordination, and you have the budget to set up tools to meet your needs.

          2. 4

            Interesting, Go in my experience is one of the places I have seen the most extreme polyrepo/microservice setups. I helped a small shop of 2 devs with 50+ repos. One of the devs was a new hire…

          3. 0

            Rolling out updates to a dependency organisation-wide is hard, even for compatible changes. I need to update 25 apps, make PRs for 25 apps.

            What exactly is the concern here? Project ownership within an org? I fail to see how a monorepo is different from giving everyone commit access to all the repos. PRs to upstream externally? Doesn’t make a difference either.

            1. 3

              The concern is that it’s time-consuming and clumsy to push updates. If I update e.g. the database package I will need to update that for 25 individual apps, and then create and merge 25 individual PRs.

              1. 3

                The monorepo helps with this issue, but it can also be a bit insidious. The dependency is a real one, and any update to it needs to be tested. It’s easier to push the update to all 25 apps in a monorepo, but that also tends to let developers make updates without making sure the changes are safe everywhere.

                Explicit dependencies with a single line update to each module file can be a forcing function for testing.
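
                For example (module paths are made up), the bump is a single reviewable line in each app’s go.mod, which is exactly what puts the change in front of that app’s owners and its test suite:

                    module example.com/org/app-billing

                    go 1.21

                    require (
                        example.com/org/pkg/database v1.5.0 // the one-line, per-app bump (was v1.4.0)
                        example.com/org/pkg/log v1.2.3
                    )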

                1. 2

                  but it also can tend to allow developers to make updates without making sure the changes are safe everywhere

                  The Google solution is to push the checking of a change’s safety onto the team consuming it, not the one creating it.

                  Changes are created using Rosie, and small commits are created with a review from a best guess as to who owns the code. Some Rosie changes wait for all people to accept. Some don’t, and in general I’ve been seeing more of that. Rosie changes generally assume that if your tests pass, the change is safe. If a change is made and something got broken in your product, your unit tests needed to be better. If that break made it to staging, your integration tests needed to be better. If something got to production, you really have bigger problems.

                  I generally like this solution. I have a very strong belief that during a refactor, it is not the responsibility of the refactor author to prove to you that it works for you. It’s up to you to prove that it doesn’t via your own testing. I think this applies equally to tiny changes in your own team up to gigantic monorepo changes.

                2. 1

                  Assuming the update doesn’t contain breaking changes, shouldn’t this just happen in your CI/CD pipeline? And if it does introduce breaking changes, aren’t you going to need to update 25 individual apps anyway?

                  1. 4

                    aren’t you going to need to update 25 individual apps anyway?

                    The breaking change could be a rename, or the addition of a parameter, or something small that doesn’t require careful modifications to 25 different applications. It might even be scriptable. Compare the effort of making said changes in one repo vs 25 repos and making a PR for each such change.

                    Now, maybe this just changes the threshold at which you make breaking changes, since the cost of fixing downstream is high. But there are trade-offs there too.

                    I truthfully don’t understand why we’re trying to wave away the difference in the effort required to make 25 PRs vs 1 PR. Frankly, in the way I conceptualize it, you’d be lucky if you even knew that 25 PRs were all you needed. Unless you have good tooling to tell you who all your downstream consumers are, that might not be the case at all!

                    1. 1

                      Here’s the thing: I shouldn’t need to know that there are 25 PRs that have to be sent, or even 25 apps that need to be updated. That’s a dependency management problem, and that lives in my CI/CD pipeline. Each dependent should know which version(s) it can accept. If I make any breaking changes, I should make sure I alter the versioning in such a way that older dependents don’t try and use the new version. If I need them to use my new version, then I have to explicitly deprecate the old one.
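
                      To make that concrete in the Go terms used upthread (module paths are hypothetical), semantic import versioning does roughly this: a breaking release becomes a new /v2 module path, so dependents that haven’t migrated keep resolving v1 until they explicitly opt in.

                          package main

                          import (
                              "fmt"

                              "example.com/org/pkg/database" // still resolves to v1.x for this app
                          )

                          func main() {
                              // Opting in to the breaking release is an explicit, per-app change:
                              // switch the import to example.com/org/pkg/database/v2 and fix the call sites.
                              fmt.Println(database.Version()) // Version() is a hypothetical exported helper
                          }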

                      I’ve worked in monorepos with multiple dependents all linking back to a single dependency, and marshalling the requirements of each of those dependents with the lifecycle of the dependency was just hell on Earth. If I’m working on the dependency, I don’t want to be responsible for the dependents at the same time. I should be able to mutate each on totally independent cycles. Changes in one shouldn’t ever require changes in the other, unless I’m explicitly deprecating the version of the dependency one dependent needs.

                      I don’t think VCS is the right place to do dependency management.

                      1. 3

                        Round and round we go. You’ve just traded one problem for another. Instead of 25 repos needing to be updated, you now might have 25 repos using completely different versions of your internal libraries.

                        I don’t want to be responsible for the dependents at the same time.

                        I mean, this is exactly the benefit of monorepos. If that doesn’t help your workflow, then monorepos ain’t gunna fly. One example where I know this doesn’t work is in a very decentralized ecosystem, like FOSS.

                        If you aren’t responsible for your dependents, then someone else will be. Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency. I mean sure, if that’s what your organization requires (like FOSS does), then you have to suck it up and do it. Otherwise, no, I don’t actually want to apply dependency management to every little thing.

                        Your complaints about conflating VCS and dependency management ring hollow to me.

                        1. 1

                          I mean, again, this arises from personal experience: I’ve worked on a codebase where a dependency was linked via source control. It was an absolute nightmare, and based on that experience, I reached this conclusion: dependencies are their own product.

                          I don’t think this is adding “dependency management to every little thing”, because dependency management is like CI: it’s a thing you should be doing all the time! It’s not part of the individual products, it’s part of the process. Running a self-hosted dependency resolver is like running a self-hosted build server.

                          And yes, different products might be using different versions of your libraries. Ideally, nobody pins to a specific minor release; that’s an anti-pattern. Ideally, you carefully version known breaking changes. Ideally, your CI suite is robust enough that regressions never make it into production. I just don’t see how different versions of your library being in use is a problem. Why on Earth would I want to go to every product that uses the library and update it, excepting show-stopping, production-critical bugs? If it’s just features and performance, there’s no point. Let them use the old version.

                          1. 2

                            You didn’t really respond to this point:

                            Five breaking changes and six months later, I feel bad for the poor sap that needs to go through the code migration to address each of the five breaking changes that you’ve now completely forgotten about just to add a new feature to that dependency.

                            You ask why it’s a problem to have a bunch of different copies of your internal libraries everywhere? Because it’s legacy code. At some point, someone will have to migrate its dependents when you add a new feature. But the point at which that happens can be delayed indefinitely, right up until the very moment at which it is required to happen. By that point, the library may have already gone through 3 refactorings and several breaking changes. Instead of the person making the changes front-loading the migration of dependents as those changes happen, you now effectively have dependents using legacy code.

                            Subsequent updates to those dependents now potentially fall on the shoulders of someone else, and it introduces surprise yak shaves. That someone else then needs to go through and apply a migration to their code if they want to use an updated version of the library that has seen several breaking changes. That person then needs to understand the breaking changes and apply them to their dependent. If all goes well, maybe this is a painless process. But what if the migration in the library resulted in reduced functionality? Or if the API made something impossible that you were relying on? It’s a classic example of someone not understanding all of the use cases of their library and accidentally removing functionality from its users. Happens all the time.

                            Now that person who is trying to use your new code needs to go and talk to you to figure out whether the library can be modified to support the original functionality. You stare at them blankly for several seconds as you try to recall what it is you did 6 months ago and what motivated it. But all of that would have been avoided if you were forced to go fix the dependent in the first place.

                            Like I said, your situation might require one to do this. As I said above, which you seem to have completely ignored, FOSS is one such example of this. It’s decentralized, so you can’t realistically fix all dependents. It’s not feasible. But in a closed ecosystem inside a monorepo, your build doesn’t pass unless all dependents are fixed. Everything moves forward, code migrations are front loaded and nobody needs to spend any time being surprised by a necessary code migration.

                            I experience both of these approaches to development: a monorepo at work, and lots of participation in FOSS. In the FOSS world, the above happens all the time exactly because we have a decentralized system of libraries that are each individually versioned, all supported by semver. It’s a great thing, but it’s super costly, yet necessary.

                            Dependency management with explicit versioning is a wonderful tool, but it is costly to assign versions to things. Sometimes it’s required. If so, then great, do it. But it is most certainly not something that you “just do” like you do CI. Versioning requires some judgment about the proper granularity at which you apply it. Do you apply it to every single module? Every package? Just third party dependencies? You must have varying answers to these and there must be some process you follow that says when something should be independently versioned. All I’m saying is that if you can get away with it, it’s cheaper to make that granularity as coarse as possible.

            1. 15

              The link to the transactions page says:

              Your transactions, including deliveries and online orders, gathered from Google services like your Assistant and Gmail

              So looks like it’s just a different view on your emails. There is nothing inherently nefarious about this. If you use gmail you’re already trusting Google with your purchase details, and aggregating that data on an overview page seems like a reasonable feature. The “Reservations” feature (“Your upcoming and past reservations for flights, hotels, and events gathered from Google services, like your Assistant or Gmail”) seems even more useful!

                The issue here seems to be one of trust. You don’t trust that Google merely presents this overview to you, rather than doing other things with the data. This is fair enough, but … why use Gmail if you don’t trust Google with this data?

              There is a link to a privacy document. What does that say? I can’t check, as this list is empty for me as I don’t use Gmail. You say that “I DO NOT want someone to use my personal data to grow their business or run analytics without my consent”, which is fair enough, but it’s not at all clear to me that this data is anything other than just a handy overview.

              1. 9

                Trust is not a binary, and consent for privacy purposes is tied to intended use.

                I suspect many people are okay with Google displaying ads based on keywords/crude email content analysis in exchange for GMail, but entirely not okay with Google analyzing email contents to extract private data out of it and then do detailed processing/data aggregation on it.

                Purchases displayed in a structured way is not just another view on existing emails. The emails have been specifically processed into structured data. There are a lot of things you could infer based on email contents that most users wouldn’t be comfortable with. To give you a related example, Uber got a lot of blowback last year, because their data collection ended up with their employees spying on celebs and people they knew to find out who is having sex with who, amongst other things.

                  Having your data sit at a company, and having that company store it in a personalized, structured, individualized way with unclear access, are two very different things.

                1. 2

                  unclear access

                    I’m reasonably sure the linked privacy document should clarify that. As mentioned in my previous comment, I don’t have the link so I can’t read it.

                    I don’t disagree that processing data into structured data can be a risk in and of itself, but the article was written in a way that makes this seem like something different from what it is, and makes various unsubstantiated claims, such as that Google is “using” the data in some way, that Google will “use my personal data to grow their business or run analytics”, and that it involves “purchases we do or the credit card bank details that is being shared to make these purchases”.

                    Maybe those claims are true – I suspect they’re not, but could be wrong. However, the article has failed to demonstrate that they’re true. Instead, it seems like the author saw this overview and immediately proceeded to draw conclusions and write this without looking much further into it. There are many details that are unclear, and those details are vastly important. I hold little love for Google (I only use their services when there’s no other good choice, like Android) but not everything Google does is some sort of plot to get at our data.

                  1. 3

                      Given that Google explicitly negotiated paid access to something like 70% of physical store purchase data in the US, combined with the fact that this purchase history is of marginal utility to a GMail user AND very valuable to a corporation, I highly doubt that this data is only being used to provide some UI features to GMail users.

                    It is exceedingly likely that Google includes this purchase data as part of user targeting/profiling, sells the aggregate info, makes the detailed data available to “partners” and allows small enough subgroups in targeting to make individual users identifiable.

                    1. 2

                      Perhaps. But is this substantiated by the information in the privacy document?

                      1. 4

                        Why do you care about Google’s privacy document? Almost all of those are written in a way to maximize what a company lets themselves do with your data, while seeming like they don’t.

                        And even those extremely vague self-imposed limits are regularly breached across the IT sector.

                        1. 2

                            Don’t you think that serious allegations such as “Google is harvesting personal data from emails and using or selling it” warrant some kind of evidence beyond “well, they stand to gain from it”?

                          I am not a fan of Google by any means, but I also think it’s important to not immediately jump to the first conclusion when there are other options, especially if it’s confirming a pre-held belief.

                          1. 4

                            Aren’t you overdoing the devil’s advocate thing a bit?

                            Your post might have been reasonable around 2013-15, but the past three years have been full of data misuse, especially and particularly 2018. As far as I’m concerned, it’s up to Google to explain in clear terms why they are collecting structured data like that and to make iron-clad legal guarantees that it is not used for any other purpose that the users haven’t consented to.

                            If you trust Google enough to give them the benefit of the doubt after all this, that’s certainly your choice.

                            1. 2

                                I don’t like Google any more than you; I’ve been wary of them for years, and have avoided their products where I can, since long before it was cool. But I’m also not going to immediately assume the worst when there are still open questions; I think that’s a good example of confirmation bias.

                              it’s up to Google to explain in clear terms why they are collecting structured data like that and to make iron-clad legal guarantees that it is not used for any other purpose that the users haven’t consented to.

                              I completely agree with that; that’s exactly why I’m asking what Google says about this data.

                2. 4

                  If someone sent you a letter and then you saw the postman outside reading your letter before giving it to you, would that be ok? You had your letters sent through the post office so you should be ok with them reading your letters.

                  The difference between delivering messages and reading/processing/extracting info from is very important.

                  1. 7

                    If the postman had for years been providing an indexing service, telling you at any point in time which email contained a word or phrase, and covering costs for the time spent by using the knowledge of what they read to pick better flyers for you, I think finding them reading your mail should be entirely unsurprising.

                    1. 1

                        This is where it all breaks down: the postman is a government service, and yeah, they can’t pick and choose customers or associations, or even rifle through your stuff (well, they most certainly can, thanks to terrorism laws).

                        Google, however, is not the government, and they are free to do whatever they please, especially after you click the ‘I agree’ button. Go ahead and try to migrate off their platform after using it for 10+ years. It’s a nightmare. I still have services and people who just insist on using my Gmail instead of something that I own (well, technically, after the Stormfront thing it’s very obvious that nobody owns anything).

                      Add in those ‘public private partnerships’ and you can bet that everything is being shared.

                      1. 2

                        (well technically after the stormfront thing, its very obvious that nobody owns anything).

                        Would you like to expand on this? I’m unclear on the context.

                        1. 1

                          Stormfront is a popular neo-Nazi website (well, as “popular” as these things get anyway). Their domain registrar got fed up with them after a member killed a few people for being black. It took them a while to find a new place to host their website, as no one was willing to host, you know, literal neo-Nazis discussing how great it is to kill non-whites.

                          Daily Stormer had similar problems; they helped organize the Charlottesville neo-Nazi protests, and got kicked off their host, and also had some trouble finding a new host as few were willing to accept them.

                            So neozeed’s point, presumably, is that “owning” your own domain isn’t really a guarantee of anything; you can still get kicked off the internet. If you’re not a literal neo-Nazi, then you’ve got little to fear, though (personally, I am okay with that situation).

                          1. 2

                              It’s not even that they were censored; they were refused service. They are welcome to set up their own DC and buy network capacity from a common carrier (who can’t refuse).

                    2. 1

                      This is a false analogy; the postman is a human being with an understanding of what they’re reading. A script to aggregate data in a single overview isn’t.

                      1. 11

                          It’s even worse. The postman likely can’t do anything with the data, but Google can use it to build a profile on you and influence your purchasing decisions, as well as report it all back to any government that asks for it.

                        1. 2

                          The postman likely can’t do anything with the data

                            A real person with sensitive information on another real person they know the identity of could probably do more if they really wanted to.

                          Google can use it

                          “Can use” is not the same as “actually using”.

                          influence your purchasing decisions

                          Where did you get that from? It seems quite a leap to me.

                          report it all back to any government that asks for it.

                          It’s not like Google just hands out information just because a government asks nicely. And I’m fairly sure that governments read postal mail when they consider it warranted, too.

                          1. 12

                            Where did you get that from? It seems quite a leap to me

                            I mean, Google’s entire business model is predicated on influencing purchasing decisions. They’re an advertising company. Influencing your purchasing is how they make money. It’s a bit absurd to think that they’re not going to attempt to do what their entire business is built to do, using every tool in their arsenal to do it.

                            1. 6

                              It’s not like Google just hands out information just because a government asks nicely. And I’m fairly sure that governments read postal mail when they consider it warranted, too.

                              Actually, they do. In the first half of 2018 they received almost 60000 requests from governments of which 67% were processed. You can see the details at https://transparencyreport.google.com/user-data/overview?hl=en

                              I think in most countries asking nicely is exactly what governments do. Through a standard process with a template letter signed off by some government official in which they just have to substitute your account name.

                              1. 1

                                  Don’t forget places like Facebook & their shadow profiles. Don’t you love it when you see the ‘share/thumbs up’ buttons? You are being tracked, and profiled.

                            2. 6

                              This is the most dangerous and naive way to think in the age of AI and ML.

                              The difference between that human and that script is getting narrower and narrower.

                                In fact, the script (which is really a warehouse-size facility full of data processing servers) probably does a MUCH better job understanding and using that email than the human does.

                              Don’t think it is just “a script to aggregate data” - it is most likely a script to parse, understand, process, correlate, profile-build the data.

                              Remember that Gmail is free for a reason. You are the product. Your email is being used to generate money.

                          2. 3

                            So looks like it’s just a different view on your emails. There is nothing inherently nefarious about this.

                            And yet it feels different because Google has taken the time to scan the contents of my email and attempt to understand it in some way.

                            What’s interesting is that it’s nothing another email provider couldn’t do, either technically or legally - it’s just more obvious here. I suspect (based only on my experience of Google’s products) that Google are better at it than most, but there’s no reason to believe other hosted email providers aren’t doing this sort of thing.

                            1. 4

                                I’m not convinced this is even legal; Europe’s GDPR might well make it illegal. Google et al are probably looking at multi-billion euro fines, given the way these companies structured their attitude to consent and privacy.

                              1. 1

                                Quite possibly! I’m not a lawyer, or anywhere near familiar enough with the relevant laws. But I suspect the relevant question is what @arp242 hints at - does Google use this data for advertising? Or is it simply presented as a search filter on your emails (albeit one augmented by some smart content processing)?

                                I assume the former, since we already know that’s Google’s business. But would be interested to know whether that distinction would affect their legal standing.

                                1. 0

                                  IANAL

                                  The GDPR does not matter since you agreed to the Gmail Terms of Usage.

                                  The GDPR is not some magical protection for features you don’t like. That would be severely limiting to services.

                                  1. 8

                                    The GDPR does not matter since you agreed to the Gmail Terms of Usage.

                                    Yes, it does.

                                    Consent must be a specific, freely-given, plainly-worded, and unambiguous affirmation given by the data subject; an online form which has consent options structured as an opt-out selected by default is a violation of the GDPR, as the consent is not unambiguously affirmed by the user. In addition, multiple types of processing may not be “bundled” together into a single affirmation prompt, as this is not specific to each use of data, and the individual permissions are not freely-given. (Recital 32) https://en.wikipedia.org/wiki/General_Data_Protection_Regulation#Lawful_basis_for_processing

                                      If the Purchases page is legal, it’s legal because you already consented to GMail having access to, and indexing, your email. Not because it’s buried under the ToS. Consenting to having your email automatically categorized and indexed would also not implicitly give Google permission to use it for ad targeting.

                                    There’s a reason everybody’s so panicked about the GDPR. It is very, very strict.

                              2. 2

                                My question is if I delete an email, does the purchase disappear?

                                1. 1

                                  According to Reddit, yes.

                                2. 1

                                  You should read the Gmail Terms of Service instead. It will clearly explain that Google can do whatever they want with your email. According to the ToS they can even publish it or share it with “those we work with”.

                                1. 1

                                      The whole solution, the “web stack”, is wrong. The same thing could easily be done faster and more efficiently—there is just so much wasted potential.

                                  I say this all the time. HTML and the DOM are terrible ways to build UIs. Terrible.

                                  1. 3

                                        Gods, for a moment I saw REMY.DAT;2 and thought I’d been writing articles about VMS in my sleep.

                                        VMS is the best OS in history, and nothing has quite matched it. And I say that even after learning to do assembly on a VAX machine. Which was horrible, but that’s not VMS’s fault.

                                    1. 2

                                      Haha, another Remy here. Shell accounts are available on DECUS if you want to play around again

                                      1. 1

                                        What makes OpenVMS better than its competitors (mostly Unix I guess)? From the article, it seems fascinatingly different, but ultimately it just looks more complex in terms of feature count.

                                        1. 5

                                          Admittedly, it’s a bit of hyperbole. But there are a lot of things VMS did before Unix. When I used VMS, it was on a mildly large cluster. Clustering at that scale just wasn’t a thing in Unixland at the time. The filesystem is itself interesting, and the inherent versioning doesn’t make things all that much more complex.

                                          But the biggun is its binary formats. VMS had a “common language environment” which specified how languages manage the stack, registers, etc., and it meant that you could call libraries written in one language from any other language. Straight interop across languages. COBOL to C. C to FORTRAN. FORTRAN into your hand-coded assembly module.

                                          1. 3

                                                As a newbie, I notice more consistency. DCL (shell) options and syntax are the same for every program; no need to remember if it’s -h, --help or /? etc. Clustering is easy, consistent, and scales. Applications don’t have to be cluster-aware and you are not fighting against the cluster (as compared to Linux with keepalived, corosync, and some database or software).

                                            1. 3

                                                  I’ll add, on the clustering, that it got pretty bulletproof over time, with clusters running for many years (17 years claimed for one). Some of its features included:

                                              1. The ability to run nodes with different ISA’s for CPU upgrades

                                                  2. A distributed lock protocol that others copied later for their clustering.

                                              3. Deadlock detection built into that.

                                                  There was also the spawn vs fork debate. The UNIX crowd went with fork for its simplicity. VMS’s spawn could do extra stuff such as CPU/RAM metering and customizing security privileges. The Linux ecosystem eventually adopted a pile of modifications and extensions to do that sort of thing for clouds. Way less consistent than VMS, though, with ramifications for reliability and security.

                                              EDIT: In this submission, I have a few more alternative OS’s that had advantages over UNIX. I think it still can’t touch the LISP machines on their mix of productivity, consistency, maintenance, and reliability. The Smalltalk machines at PARC had similar benefits. Those two are in a league of their own decades later.

                                          1. 3

                                            Still have an actual BeBox/133. Amazing how well BeOS ran on a dual PowerPC 603, especially since the 603 was never designed for multiprocessing. A marvel of hardware and software.

                                            1. 2

                                                      Googling it, I find it was the slowest of that CPU series and the only one without SMP support. I’m more amazed at the demo if it was on that box.

                                              1. 6

                                                On that box, you could decode multiple video streams at the same time. This is at 90s resolution, so that might not sound like much, but at the time, 1024x768 video would be a tough job for most computers, and multiple streams at once? Madness!

                                                (The first time I saw Office Space was at stellar 240p, illicitly “streamed” from a network share while I was working helldesk, which I can only assume was the way the creators intended it to be watched)

                                            1. 1

                                              From the other direction: frameworks should be designed so that you don’t have to learn the framework. The framework should be:

                                              • Obvious
                                              • A codification of current best practices
                                              • Minimal
                                              • Free of “magic” (most frameworks stumble here)
                                              • Clear leveraging of language features per best practices for that language

                                              If a framework requires boilerplate to use, then it’s already a broken framework. If it requires scaffolding tools to use, it’s also a broken framework. Sorry, I recognize that I’ve essentially said “all frameworks are bad”. I’d soften that to saying “no frameworks are good enough”. A good framework should grow out of what developers are already doing.

                                              I think this becomes such a problem in the web development space because we still don’t have a good approach to building rich UIs in the web. The web browser is such an insanely complex target for applications that there might not be a good way to build UIs in the web browser, excepting the obvious text-oriented interfaces which are what the web was built to handle.

                                              1. 7

                                                I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff but I still think the outrage is overblown. It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line. In the short term the average user gets better compatibility which seems like a win overall even if the diversity proponents are a little upset.

                                                1. 10

                                                  I thought the conspiracy theory folks were wrong. It’s looking like they were right. Google is indeed doing some shady stuff

                                                              If it’s an organization, you should always look at their incentives to know whether they have a high likelihood of going bad. Google was a for-profit company aiming for an IPO. Their model was collecting info on people (i.e. a surveillance company). These are all incentives for them to do shady stuff. Even if they want Don’t Be Evil, the owners typically lose a lot of control over whether they do that after they IPO. That’s because boards and shareholders that want numbers to go up are in control. After IPOs, decent companies start becoming more evil most of the time, since evil is required to always make specific numbers go up or down. Bad incentives.

                                                  It’s why I push public-benefit companies, non-profits, foundations, and coops here as the best structures to use for morally-focused businesses. There’s bad things that can still happen in these models. They just naturally push organizations’ actions in less-evil directions than publicly-traded, for-profit companies or VC companies trying to become them. I strongly advise against paying for or contributing to products of the latter unless protections are built-in for the users with regards to lock-in and their data. An example would be core product open-sourced with a patent grant.

                                                  1. 9

                                                    Capitalism (or if you prefer, economics) isn’t a “conspiracy theory”. Neither is rudimentary business strategy. It’s amusing to me how many smart, competent, highly educated technical people fail so completely to understand these things, and come up with all kinds of fanciful stories to bridge the gap. Stories about the role and purpose of the W3C, for instance.

                                                    Having read all these hand-wringy threads about implementation diversity in the wake of this EdgeHTML move, I wonder how many would complain about, say, the lack of a competitor to the Linux kernel? There’s only one kernel, it’s financially supported by numerous mutually distrustful big businesses and used by nearly everybody, its arbitrary decisions about its API are de-facto hard standards… and yet I don’t hear much wailing and gnashing, even from the BSD folks. How is the linux kernel different than Chromium?

                                                    1. 16

                                                      While I actually am concerned about a lack of diversity in server-side infrastructure, the Linux kernel benefits, as it were, from fragmentation.

                                                      There’s only one kernel

                                                                  This simply isn’t true. There’s only one development effort to contribute to the kernel. There are, on the other hand, many branches of the kernel tuned to different needs. As somebody who spent his entire day at work today mixing and matching different kernel variants and kernel modules to finally get something to work, I’m painfully aware of the fragmentation.

                                                      There’s another big difference, though, and that’s in leadership. Chromium is run by Google. It’s open source, sure, but if you want your commits into Chromium, it’s gonna go through Google. The documentation for how to contribute is littered with Google-specific terminology, down to including the special internal “go” links that only Google employees can use.

                                                      Linux is run by a non-profit. Sure, they take money from big companies. And yes, money can certainly be a corrupting influence. But because Linux is developed in public, a great deal of that corruption can be called out before it escalates. There have been more than a few developer holy wars over perceived corruption in the Linux kernel, down to allowing it to be “tainted” with closed source drivers. The GPL and the underlying philosophy of free software helps prevent and manage those kinds of attacks against the organization. Also, Linux takes money from multiple companies, many of which are in competition with each other. It is in Linux’s best interest to not provide competitive leverage to any singular entity, and instead focus on being the best OS it can be.

                                                      1. 3

                                                        Performance tuning is qualitatively different than ABI compatibility. Otherwise, I think you make some great points. Thanks!

                                                      2. 7

                                                        If there is an internal memo at Google along the lines of “try to break the other web browsers’ perf as much as possible” that is not “rudimentary business strategy”, it’s “ground for anti-trust action”.

                                                        It’s as good of a strategy as helping the Malaysian PM launder money and getting a 10% cut (which… hey might still pay off)

                                                        1. 5

                                                                    The main difference is that there are many interoperable implementations of the *nix/SUS/POSIX libc/syscall parts, and glibc+Linux is only one of them. A very popular one, but certainly not the only one. Software that runs on all (or most) *nix variants is incredibly common, and when something is gratuitously incompatible (by being glibc+Linux or MacOS only) you do hear the others complain.

                                                          1. 2

                                                            Software that runs on all (or most) *nix variants is incredibly common

                                                            If by “runs on” you mean “can be ported to and recompiled without major effort”, then I agree, and you’re absolutely right to point out the other parts of the POSIX and libc ecosystem that makes this possible. But I can’t think of any software that’s binary compatible between different POSIX-ish OSs. I doubt that’s even possible.

                                                            On the other side of the analogy, in fairness, complex commerical web apps have long supported various incompatible quirks of multiple vendor’s browsers.

                                                            1. 6

                                                              Multiple OSs, including Windows, can run unmodified Linux binaries.

                                                          2. 4

                                                            How is the linux kernel different than Chromium?

                                                            As you just said it,

                                                            financially supported by numerous mutually distrustful big businesses

                                                            There’s no one company making decisions about the kernel. That’s the difference.

                                                            1. 4

                                                              There’s no one company making decisions about the kernel. That’s the difference.

                                                              Here comes fuchsia and Google’s money :/

                                                            2. 1

                                                              I am disgusted with the Linux monoculture (and the Linux kernel in general), even more so than with the Chrome monoculture. But that fight was fought a couple decades ago, it’s kinda late to be complaining about it. These complaints won’t be heard, and even if they are heard, nobody cares. The few who care are hardly enough to make a difference. Yes we have the BSDs (and I use one) and they’re in a minority position, kinda like Firefox…

                                                              1. 2

                                                                How much of a monoculture is Linux, really? Every distro tweaks the kernel at least to some extent, there are a lot of patch sets for it in the open, and if you install a distro you get to choose your tools from the window manager onwards.

                                                                        The corporatization of Linux is IMO problematic. Linus hasn’t sent that many angry emails, proportionally speaking, but they make the headlines every time, so my conspiracy theory is that the corporations that paid big bucks for board seats on the Foundation bullied him into taking his break.

                                                                We know that some kernel decisions have been made in the interest of corporations that employ maintainers, so this could be the tip of an iceberg.

                                                                Like the old Finnish saying “you sing his songs whose bread you eat”.

                                                            3. 3

                                                              It’s a browser engine and Microsoft engineers have the skill set to fork it at any point down the line.

                                                              I think this is true. If Google screws us over with Chrome, we can switch to Firefox, Vivaldi, Opera, Brave etc and still have an acceptable computing experience.

                                                              The real concerns for technological freedom today are Google’s web application dominance and hardware dominance from Intel. It would be very difficult to get a usable phone or personal server or navigation software etc without the blessing of Google and Intel. This is where we need more alternatives and more open systems.

                                                              Right now if Google or Intel wants to, they can make your life really hard.

                                                              1. 8

                                                                Do note that all but Firefox are somewhat controlled by Google.

                                                                Chrome would probably have been easier to subvert if it wasn’t open source; now it’s a kind of cancer in most “alternative” browsers.

                                                                1. 5

                                                                  I don’t know. MIPS is open sourcing their hardware and there’s also RISC-V. I think the issue is that as programmers and engineers we don’t collectively have the willpower to make these big organizations behave because defecting is advantageous. Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.

                                                                  1. 9

                                                                    “Join the union and have moral superiority or be a mercenary and get showered with cash. Right now everyone chooses cash and as long as this is the case large corporations will continue to press their advantage.”

                                                                        Boom. You nailed it! I’ve been calling it out in threads on politics and business practices. Most of the time, people that say they’re about specific things will ignore them for money or try to rationalize how supporting it is good due to other benefits they can achieve within the corruption. Human nature. You’re also bringing in organizations representing developers to get better pay, benefits, and so on. Developers have been more reluctant to do that than creatives in some other fields.

                                                                    1. 3

                                                                      Yup. I’m not saying becoming organized will solve all problems. At the end of the day all I want is ethics and professional codes of conduct that have some teeth. But I think the game is rigged against this happening.

                                                                    2. 2

                                                                          I don’t think RISC-V is ready for general-purpose use. Some CPUs have been manufactured, but it would be difficult to buy a laptop or phone that carries one. I also think that manufacturing options are too limited. Acceptable CPUs can come from maybe Intel and TSMC, and who knows what code/sub-systems they insert into those.

                                                                      This area needs to be more like LibreOffice vs Microsoft Office vs Google Docs vs others on Linux vs Windows vs MacOS vs others

                                                                    3. 2

                                                                      They already are screwing us over with chrome, this occurrence is evidence of that.

                                                                  1. 4

                                                                    You can, but should you?

                                                                    1. 13

                                                                      It depends!

                                                                      For something like sum(), average(), max(), or the like, the answer’s an emphatic yes: despite doing “more” on the SQL server, you’ll actually consume less RAM (tracking a single datum of a sum/average/max/min, rather than building up a full response list) and send less over the network. That’s a definite win/win.

                                                                      For some of the other stuff in the article, the answer’s murkier. String operations in general, including the ones they’re mentioning (e.g. GROUP_CONCAT, which is specific to MySQL, but has equivalents in other databases) are fast in MySQL, but not necessarily in other databases. On the reverse side, some complex but amazingly useful queries (such as subselects) that are fast in SQLite and PostgreSQL are slow in MySQL, because MySQL’s design requires the generation of temporary tables (or at least still did as of roughly a year ago). PostgreSQL can likewise do some complex JSON ops server-side highly efficiently, and SQL Server can do similar stuff for XML, but I’m not sure I’d recommend doing that in, say, SQLite, even if you mechanically can (via extension methods), because it can defeat the query optimizer if not very carefully implemented.

                                                                      So: should you? Sometimes! Unfortunately, you need to learn your database to know the answer.
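
                                                                          To make the first case concrete, here’s a minimal Go sketch using database/sql (the Postgres driver and the orders table are assumptions): the aggregate query sends a single row back, while the client-side version drags every row over the network just to fold it into one number.

                                                                              package main

                                                                              import (
                                                                                  "database/sql"
                                                                                  "fmt"
                                                                                  "log"

                                                                                  _ "github.com/lib/pq" // any database/sql driver works; Postgres is assumed here
                                                                              )

                                                                              func main() {
                                                                                  db, err := sql.Open("postgres", "postgres://localhost/shop?sslmode=disable")
                                                                                  if err != nil {
                                                                                      log.Fatal(err)
                                                                                  }
                                                                                  defer db.Close()

                                                                                  // Server-side: the database keeps a single running aggregate and returns one value.
                                                                                  var avg float64
                                                                                  if err := db.QueryRow(`SELECT AVG(total) FROM orders`).Scan(&avg); err != nil {
                                                                                      log.Fatal(err)
                                                                                  }
                                                                                  fmt.Println("average order total (server-side):", avg)

                                                                                  // Client-side equivalent: every row crosses the network and sits in
                                                                                  // application memory just to be averaged.
                                                                                  rows, err := db.Query(`SELECT total FROM orders`)
                                                                                  if err != nil {
                                                                                      log.Fatal(err)
                                                                                  }
                                                                                  defer rows.Close()
                                                                                  var sum float64
                                                                                  var n int
                                                                                  for rows.Next() {
                                                                                      var t float64
                                                                                      if err := rows.Scan(&t); err != nil {
                                                                                          log.Fatal(err)
                                                                                      }
                                                                                      sum += t
                                                                                      n++
                                                                                  }
                                                                                  if err := rows.Err(); err != nil {
                                                                                      log.Fatal(err)
                                                                                  }
                                                                                  if n > 0 {
                                                                                      fmt.Println("average order total (client-side):", sum/float64(n))
                                                                                  }
                                                                              }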

                                                                      1. 3

                                                                        Also, speed concerns aside, declarative languages are nice to read and write, as the author points out at the beginning of the article.

                                                                        1. 2

                                                                          String operations in particular seem to vary widely in how easy or hard they are between database engines. I remember being amazed at how few string functions MS SQL Server has - IIRC, substring and indexOf and that’s about it. Super clumsy to do anything but the most basic things. On the other hand, PostgreSQL has a full regexp find and replace engine and enough functions to do just about anything you could want.

                                                                        2. 4

                                                                          Another benefit is that you can put all your SQL code into stored procedures, and then the rest of your code can just call that functionality. Use multiple languages and it’s still all standardized in your stored procedures.

                                                                          1. 3

                                                                            Aren’t stored procedures a config management nightmare? As in, you distribute your business logic between code and the DB instance.

                                                                            1. 4

                                                                              I would argue that no business logic belongs in the stored procedures, but I’d further argue that making statements about your relations isn’t business logic. If you’re using a database as a source of truth, that’s basically all it should be worrying about: what is true about your domain, and what may be true about your domain (constraints).
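
                                                                              A small sketch of what I mean (the tables are invented): the “what may be true” part can be declared in the schema itself, so every application that touches the database gets the same guarantees.

                                                                              import sqlite3

                                                                              conn = sqlite3.connect(":memory:")
                                                                              conn.execute("PRAGMA foreign_keys = ON")
                                                                              conn.executescript("""
                                                                                  CREATE TABLE customers (
                                                                                      id INTEGER PRIMARY KEY
                                                                                  );
                                                                                  CREATE TABLE accounts (
                                                                                      id          INTEGER PRIMARY KEY,
                                                                                      customer_id INTEGER NOT NULL REFERENCES customers(id),
                                                                                      balance     INTEGER NOT NULL CHECK (balance >= 0)
                                                                                  );
                                                                              """)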

                                                                              As for versioning: no, stored procedures aren’t a config management nightmare. It’s just that few organizations bothered to put any config management around them for decades. It’s not hard to implement versioned update scripts which roll the database schema forward or backward. Honestly, it’s easier than some deployment solutions I’ve seen for application code.
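
                                                                              For example, a bare-bones sketch of such a runner (the migrations/ directory layout, file naming, and schema_version table are all assumptions, and rollback scripts are omitted): each numbered .sql file rolls the schema forward once, and the database records which version it is at.

                                                                              import sqlite3
                                                                              from pathlib import Path

                                                                              MIGRATIONS_DIR = Path("migrations")  # e.g. 001_create_users.sql, 002_add_index.sql

                                                                              def migrate(db_path):
                                                                                  # Apply, in order, every numbered script newer than the recorded version.
                                                                                  conn = sqlite3.connect(db_path)
                                                                                  conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
                                                                                  current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
                                                                                  for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
                                                                                      version = int(script.name.split("_")[0])  # "002_add_index.sql" -> 2
                                                                                      if version > current:
                                                                                          conn.executescript(script.read_text())
                                                                                          conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
                                                                                          conn.commit()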

                                                                            2. 2

                                                                              Having dealt with a code base that follows the pattern of putting lots of logic in stored procedures called by non-SQL code, I can say this can go… too far. I am sad when I see stored procedures with multiple case clauses, calling other stored procedures, recursively calling themselves, making literal strings that are eval’d as other stored procedures and SO MANY CURSORS, and there’s no point in a test on the calling code because it doesn’t do anything.

                                                                              So… there are pros and cons.

                                                                              1. 2

                                                                                making literal strings that are eval’d as other stored procedures

                                                                                Other than that, it sounds perfectly sensible to me.

                                                                                there’s no point in a test on the calling code

                                                                                You could still test whether the functionality works or not, right?

                                                                            3. 2

                                                                              If there are going to be multiple consumers of the data, then it’s probably a good idea to make sure that the data as stored in the DBMS is correct under the definition of correctness you’re using. That can be a strong reason to pull much of the logic around the data into the DBMS itself. There are other concerns that pull in the opposite direction, of course, and perhaps you simply are using your database as a simple persistence layer. But if you expect ad-hoc reporting, for instance, or systems that depend on the data in the database being canonical, you are definitely a strong candidate for moving the logic into the DBMS.

                                                                              1. 1

                                                                                For small data that you are more frequently reading than writing (and you care about performance), you shouldn’t.

                                                                                Because then it makes sense to cache it. Read everything once in a blue moon (whenever it changes) instead of doing many small reads in your fast path. Your own internal data structures will easily outperform SQL: simple lookups are the low-hanging fruit; and if you need the power of a relational database, I made a simple binary-search-based framework for left-joining tables of tuples in C++, and so can you.
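
                                                                                A rough sketch of that pattern (the table and column names are invented): read the small table once, keep it sorted, and serve lookups from memory with a binary search instead of issuing a query per lookup.

                                                                                import bisect
                                                                                import sqlite3

                                                                                conn = sqlite3.connect("example.db")  # placeholder database

                                                                                # Infrequent refresh: one query pulls the whole (small) table into memory.
                                                                                rows = sorted(conn.execute("SELECT product_id, price FROM prices").fetchall())
                                                                                keys = [product_id for product_id, _ in rows]

                                                                                def price_of(product_id):
                                                                                    # Fast path: binary search over the cached, sorted rows; no SQL involved.
                                                                                    i = bisect.bisect_left(keys, product_id)
                                                                                    if i < len(keys) and keys[i] == product_id:
                                                                                        return rows[i][1]
                                                                                    return None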

                                                                              1. 14

                                                                                I’ve seen a few things on the internet (including here at Lobsters) saying, essentially, “Please, use Firefox out of concern for the ecosystem, even if it’s worse than the alternatives at <thing you care about>.” I do use Firefox, and have for the last year, but this rankles me a bit. I realize that Mozilla is (partially) a non-profit, and that even a for-profit corporation can’t do everything, but if you visit Firefox’s Bugzilla you can find tickets for obvious features that have been open for years. Here’s one that’s been open since April 2013 and which is still unassigned.

                                                                                Part of this is a PR/communication problem; Firefox is at a bit of a disadvantage in that we can all see a list of the things they are or aren’t working on right now. But every time Firefox gains a new feature that I don’t care about, I think about all of these tickets that have been open forever and have lots of comments and duplicates but which Mozilla has chosen not to work on.

                                                                                1. 10

                                                                                  Honestly, if we’re concerned about the web ecosystem, people should use Lynx more. The web would be better if most pages had to work in Lynx.

                                                                                  Edit: Which, after checking, Lobste.rs is very readable on Lynx, and you can login just fine. Unfortunately, the reply/edit links don’t work.

                                                                                  1. 4

                                                                                    I do actually know people using the web with lynx frequently (It’s a very nice browser for blind people used to the command line).

                                                                                    1. 1

                                                                                      Just so people don’t get the wrong idea, I should mention that the majority of blind people who use computers have been productively using GUIs for a couple of decades now. Yes, there are blind people who are more comfortable with the command line and even screen-oriented terminal-based applications like Lynx, but they’re a small and shrinking minority of a minority, and I would guess that even they have given in and started using a JavaScript-capable browser when needed. There is certainly not an economic barrier anymore. So don’t feel that you need to accommodate them.

                                                                                      1. 3

                                                                                        So a blind person who uses lynx should not be accommodated, because the barriers to switching to GUIs are not economic? If it’s harder for them to learn to use a GUI than to keep using lynx, why shouldn’t they be accommodated?

                                                                                    2. 3

                                                                                      Yup. I read lobste.rs in links and have to pop over to Firefox to reply. You can post a new comment, though.

                                                                                      1. 2

                                                                                        Lynx (and Links in text mode) are great options. If you still want graphics, but not the latest JS / CSS fads, maybe try Dillo or NetSurf. All these have independent rendering engines. The parts of the web that don’t work in simple browsers are largely the ones I can do without.

                                                                                      2. 10

                                                                                        I know it’s a pain when your pet issue doesn’t get fixed, but really, bookmarklets… I have never heard anyone IRL ever mention one. 99.9% of web users probably don’t know what they are, so I am betting that when a Mozilla dev has to choose what issue to work on, there will be a lot of higher-priority tasks than fixing bookmarklets. I have had some real blocking issues with some new web features not working exactly to spec in Firefox, and I have seen a lot of them get fixed in reasonable timeframes because they affect more users.

                                                                                        1. 8

                                                                                          Yeah, but that’s not what they were doing. Mozilla invested all kinds of time into projects people weren’t demanding while not fixing problems their existing users were reporting. That’s not a good way to run a business if you have one product with serious competition. Gotta keep making that product the best it can be along every attribute.

                                                                                        2. 6

                                                                                          Heh, I should have guessed that was the bookmarklet bug. It is weird that a browser that supposedly empowers the user allows remote sites to dictate what code you’re allowed to run.

                                                                                        1. 5

                                                                                          git merge --squash is one of my key tools. Do a pile of work in a local branch, committing willy-nilly. Once the unit of work is done, merge it into a shared branch with the --squash flag.

                                                                                          1. 13

                                                                                            but Electron is without question a scourge

                                                                                            Well, how about a big F* you?

                                                                                            Sorry for the swear words, but while I agree that in general Electron is not great, calling it a scourge is incredibly offensive towards those who chose to develop with Electron, and have good reasons for it. Sometimes writing native apps is cost-prohibitive, and you’re better off with an Electron app that looks a bit out of place, than have no app at all. It’s cool for you to be smug and elitist and complain about stuff not following the HIG on every single platform the app is developed for, but have you ever thought about the cost of doing so? Yeah, big companies may be able to shell out enough money and pay developers to create native apps and follow the platform’s HIG, but not everyone’s a big business. I dare say the vast majority of app developers aren’t. By hating on Electron and anyone who doesn’t polish their app perfectly, you’re alienating a whole lot of developers.

                                                                                            Learn to see the other side of the coin already.

                                                                                            (I ranted about this on my blog recently, also explaining why Electron was chosen over developing native apps.)

                                                                                            1. 37

                                                                                              complain about stuff not following the HIG on every single platform the app is developed for

                                                                                              Do you know why human interface guidelines exist? They exist because humanity is imperfect and accessibility is really important.

                                                                                              Electron apps are not accessible. They’re often completely opaque to a screen reader, they’re often completely opaque to assistive speech recognition software, they often don’t respond properly to keyboard navigation and if they do, they don’t behave as expected. They don’t often respond properly to text scaling, they don’t understand system-wide options like increased contrast or reduce motion.

                                                                                              To whole classes of users, your Electron app is worthless and unusable. What are we supposed to do? Congratulate you on your accomplishment? You need to shrink your ego and take some time to understand real world users, rather than throwing words around like “smug” and “elitist”.

                                                                                              Also your blog post doesn’t even mention the word “accessibility” once. How disappointing.

                                                                                              By hating on Electron and anyone who doesn’t polish their app perfectly, you’re alienating a whole lot of developers.

                                                                                              At the risk of sounding controversial, what does it matter if we alienate some developers? Developers should do better. They need to do better. I’m fine with alienating developers for good reasons.

                                                                                              Electron is not the answer. It’s a shortcut at best and a bandaid at worst. This isn’t a secret, so there’s really no point in acting surprised that people don’t agree with your choices.

                                                                                              1. 10

                                                                                                I think that we who care about accessibility need to avoid taking a condemning, vitriolic tone, and meet developers where they are. That way, we’ll be more likely to get results, rather than just alienating a large and growing group of people. It’s true that Electron apps often have accessibility problems. But I believe we can improve that situation without calling for people to throw out Electron entirely. Frankly, we need access to these apps more than these developers need us. So we have to work with what we’ve got.

                                                                                                No, I haven’t always lived up to this ideal when communicating with mainstream developers, and I’m sorry about that.

                                                                                                1. 14

                                                                                                  I think that we who care about accessibility need to avoid taking a condemning, vitriolic tone, and meet developers where they are.

                                                                                                  The problem is that these developers don’t want to meet anywhere else - that’s how we arrived at Electron apps in the first place. It’s the easy way out.

                                                                                                  1. 4

                                                                                                    Most trends in IT are driven by herd mentality, familiarity, leveraging ecosystems, and marketing by companies. All of these seem to contribute to Electron use just like they contributed to Java and .NET getting big. They sucked, too, compared to some prior languages. So, it’s best to identify what they’re using within what ecosystems to find an alternative that’s better while still being familiar.

                                                                                                    Then, see how many move to it or don’t for whatever reasons. Iterate from there.

                                                                                                    1. 1

                                                                                                      Developers don’t want to listen because by the time they start publishing something (based on Electron, for a number of reasons), what they meet with is pure, unconditional hate towards the technology (and not just the tech! towards developers using said tech, too!) that enabled them. It is not surprising they don’t want to listen to the haters anymore.

                                                                                                      You’re alienating new developers with this kind of mentality too, developers who would be willing to listen to your concerns. You’re alienating them because of the sins of their fathers, so to speak. I’m not surprised no one cares about accessibility, to be honest. When we’re told the world would be better off without developers like us, we’re not going to be interested in working towards better accessibility.

                                                                                                  2. 5

                                                                                                    Do you know why human interface guidelines exist? They exist because humanity is imperfect and accessibility is really important.

                                                                                                    I’m aware, thank you. I’m very much aware that Electron is not… great, for many reasons. That’s still not a reason to unconditionally call it a scourge.

                                                                                                    To whole classes of users, your Electron app is worthless and unusable. What are we supposed to do? Congratulate you on your accomplishment? You need to shrink your ego and take some time to understand real world users, rather than throwing words around like “smug” and “elitist”.

                                                                                                    You, like the article author, ignore circumstances, and generalize. Yes, my electron app is going to be useless for anyone using a screen reader. It will be useless for a whole lot of people. However, it will exist, and hundreds of people will be able to use it. Without Electron, it wouldn’t exist. So ask yourself this, which is better: an application that is not usable by some people, but makes the life of the vast majority of its intended audience easier; or an application that does not exist?

                                                                                                    Here’s the situation: there’s a keyboard with open source firmware. Right now, to change the layout, you need to edit the firmware source, compile a new one, and upload it to your keyboard. While we tried to make the process easy, it’s… not very friendly, and never going to be. So I’m building an application that lets you do this from a GUI, with no need for a compiler or anything else but the app itself. I develop on Linux, because that’s what I have most experience with. Our customers are usually on Windows or Mac, though. With Electron, I was able to create a useful application, that helps users. Without it, if I had to go native, I wouldn’t even start, because I lack the time and resources to go that route. For people that can’t use the Electron app, there are other ways to tweak their keyboard. The protocol the GUI talks can be implemented by any other app too (I have an Emacs package that talks to it, too). So people who can’t use the Electron app, have other choices.

                                                                                                    So, thank you, I do understand real world users. That is why I chose Electron. Because I did my due diligence, and concluded that despite all its shortcomings, Electron is still my best bet. Stop smugly throwing around Electron hate when you haven’t considered the circumstances.

                                                                                                    At the risk of sounding controversial, what does it matter if we alienate some developers? Developers should do better. They need to do better. I’m fine with alienating developers for good reasons.

                                                                                                    Well, for one, a lot of our customers would be deeply disappointed if they weren’t able to use the GUI configurator I built on Electron. “Developers should do better”. Well, come here and do my job then. Get the same functionality into the hands of customers without using Electron. I’ll wait (they won’t).

                                                                                                    Electron is not the answer. It’s a shortcut at best and a bandaid at worst. This isn’t a secret, so there’s really no point in acting surprised that people don’t agree with your choices.

                                                                                                    I agree it is not the best, and I’m not surprised people disagree with my use of it. I can even respect that, and have no problems with it. What I have problems with, is people calling Electron a scourge, and asserting that native is always best, and that anyone who doesn’t follow the HIG of a given platform “should do better”. I have a problem with people ignoring any and all circumstances, the reason why Electron was chosen for a particular product, and unconditionally proclaiming that the developers “should do better”. I have a problem with people who assert that alienating developers because they don’t (or can’t) write apps that match their idealistic desires is acceptable.

                                                                                                    Before writing off something completely, consider the circumstances, the whys. You may be surprised. You see, life is full of compromises, and so is software development. Sometimes you have to sacrifice accessibility, or native feel, or what have you, in order to ship something to the majority of your customers. I’d love to be able to support everyone, and write lighter, better apps, but I do not have the resources. People calling the technology that enables what I do a scourge, hurts. People asserting that I should do better, hurts.

                                                                                                    Until people stop ignoring these, I will call them elitist and smug. Because they assume others who chose Electron have the privilege of being able to choose something else, the resources to “do better”. Most often, they do not.

                                                                                                    1. 1

                                                                                                      Electron apps are not accessible. They’re often completely opaque to a screen reader, they’re often completely opaque to assistive speech recognition software, they often don’t respond properly to keyboard navigation and if they do, they don’t behave as expected.

                                                                                                      I haven’t written an electron app, but I’ve done a fair share of web UI programming with accessibility in mind. You communicate with platform accessibility APIs through web APIs. It’s not technically complicated, but it does require some domain expertise.

                                                                                                      Does it work the same way on electron?

                                                                                                      1. 8

                                                                                                        Electron embeds Chromium, which doesn’t connect to the native a11y APIs (MS are working to fix this on Windows).

                                                                                                        As a result, Electron apps are as inaccessible as Chrome (screen reader users tend to use IE, Safari or Firefox).

                                                                                                        1. 3

                                                                                                          Huh, this is a real surprise if true… I tested our web UI with screen readers across macOS and Windows in Chrome, FF, and IE, and the only problems that occurred were due to bad markup or the occasional platform bug. Reaching the accessibility API was not a problem I ran into with Chrome.

                                                                                                          1. 2

                                                                                                            Chrome is definitely accessible to screen readers. I’ve spent more time than I’d like getting JAWS to read consistently across IE, FF and Chrome. From memory, Chrome was generally the most well behaved.

                                                                                                      2. 7

                                                                                                          When Apple was first developing the Mac, they did a lot of research into human/computer interaction, and one of the results was to apply a consistent interface across the system as a whole. This led to every app having the same menu structure (for the most part, the system menu (the Apple logo) was first, “File” next, then “Edit”), and under these standard menus, the actions were largely the same and in the same order. This lowered training costs, and if users found themselves in a new app, they could at least expect some consistency in actions.

                                                                                                          I’ve been using Linux as a desktop since the mid-90s (and Unix in general since 1989), and to say there’s a lack of consistency in UI is an understatement. Some apps have a menu bar at the top of the window (the Mac’s menu bar is always at the top of the screen, per Fitts’s Law), in some you have to hold Ctrl down and press the mouse button, and in some the right mouse button brings up a pop-up menu. I’ve been able to navigate these inconsistencies, but I’m still annoyed by them.

                                                                                                          Furthermore, I’m used to the CLI, and yet even there, the general UI (the command set, the options to each command) still surprises me. I wrote about the consistency of GUIs and the inconsistencies I found in the Unix CLI over the years, and while I still prefer the CLI over the GUI [1], I can see where the consistency of the Mac GUI makes for a much better experience for many.

                                                                                                        As I’ve stated, I think it’s wonderful that PHP has enabled people to create the dynamic websites they envision, but I wouldn’t want to use the resulting code personally.

                                                                                                          [1] One can program the CLI to do repetitive tasks much more easily than one can do the same for any of today’s GUIs. There have been some attempts over the years to script the GUI (REXX, AppleScript), but it still takes more deliberate action than just writing a for loop at the command prompt for a one-off type of job.

                                                                                                        1. 5

                                                                                                            I think the case where it makes sense to go Electron is not the point of Gruber’s rant. The point was that many, many developers today find it easier to write an Electron app and use it on all platforms instead of putting time and effort into polished Cocoa apps.

                                                                                                            The core of this blog post is how the Mac really was different in terms of UI/UX. Around the time I started using a Mac (10.5), it was really differentiating itself by having “different”, “better looking and feeling” apps. Electron definitely made the Mac feel less unique. The criticism was pointed towards macOS app developers. Don’t get so offended by a simple blog post. Your reasons are fine, but that simply isn’t the case most of the time. Most people decide to go with Electron because of a plethora of mediocre JS devs who can churn out a lot of code that does something, and then you get slow, junk-like UX. In the minds of 2000s Apple fans that is a big no.

                                                                                                          Have a nice day, and move on.

                                                                                                          1. 7

                                                                                                            I think the case where it makes sense to go Electron is not the point of Gruber’s rant.

                                                                                                            Correct.

                                                                                                              The point was that many, many developers today find it easier to write an Electron app

                                                                                                            Incorrect.

                                                                                                            Please just read the article.

                                                                                                            His point is what he says: it is bad news for the Mac platform that un-Mac-like apps far worse than those that were roundly rejected 15 years ago are now tolerated by today’s Mac users.

                                                                                                            It happens to be the case that Electron is the technology of choice for multiple prominent sub-par apps; that’s a simple statement of fact. (It also isn’t purely coincidental, which is why I agree with his characterisation of Electron as a scourge. If someone like @algernon who builds apps for Electron is bent on interpreting those statements as a judgement of their own personal merit, well… be my guest?) But Electron is not singled out: Marzipan gets a mention in the same vein. On top of that, Gruber also points out the new Mac App Store app, which uses neither. The particular technologies or their individual merits are not his point.

                                                                                                            His point is, again, that a Mac userbase which doesn’t care about consistency spells trouble for the Mac platform.

                                                                                                            1. 2

                                                                                                              Marzipan gets a mention in the same vein.

                                                                                                              Electron is the only one that gets called a scourge, and is singled out in the very beginning. It’s even in the title. It’s even the first sentence, which then continues: “because the Mac is the platform that attracts people who care”

                                                                                                                If that’s not elitist smugness, I don’t know what is.

                                                                                                              1. 6

                                                                                                                  Once upon a time, Mac users were a ridiculed minority. In those days, Microsoft-powered PCs were better in just about every way. They were much faster, they had a more technically advanced OS (even crappy Win95 was far ahead of MacOS Classic), they had more applications, they had boatloads of games… just about every reason to pick one computer over another pointed in the direction of a Microsoft PC. You had to be a special kind of kook to want a Mac regardless. It was inferior to a PC in basically every dimension. The one reason to pick a Mac over the PC was the depth of consistency and care in the UI design of its software. Only users who cared about that enough to accept the mile-long list of tradeoffs went for the Mac.

                                                                                                                1. 1

                                                                                                                  Elitism is often a good thing. It’s how we get from the mundane to the truly excellent.

                                                                                                              2. 3

                                                                                                                The point was that many, many developers today find it easier to write an Electron app and use it on all platforms instead of putting time and effort into polished Cocoa apps.

                                                                                                                My beef is not with the author wishing for apps that would look more native - I share the same wish. My beef is with him calling Electron “without question a scourge”. What if I said macOS is without question a scourge, for it jails you in its walled garden? You’d be rightly upset.

                                                                                                                There’s a big difference between wishing apps would be more polished on a particular platform and calling a technology (and, by extension, the developers who chose to use it) a scourge. It reeks of privileged elitism, and a failure to understand why people go with Electron.

                                                                                                                Most people decide to go with Electron because of a plethora of mediocre JS devs who can churn out a lot of code that does something

                                                                                                                No. Most people decide to go with Electron because it provides a much better cross-platform environment than anything else. Please don’t call a whole bunch of people “mediocre JS devs”, unless you have solid data to back that up. Just because it is JS and “web stuff” doesn’t mean the people who develop it are any less smart than native app developers. Can we stop this “only mediocre people write JS/PHP/whatever” bullshit?

                                                                                                                1. 13

                                                                                                                  There are more bad developers writing webshit because there are more devs writing webshit period.

                                                                                                                  Native apps tend to outperform Electron apps and use less memory, because they do the same things without bringing in a browser and a language runtime.

                                                                                                                  Elitism is, in this case, warranted. The only really performant app (usually) in Electron I’ve seen is VSCode, because MS really does have sharp people working in a domain they’ve been leaders in for decades.

                                                                                                                  1. 7

                                                                                                                    There seems to be a shift towards less attention paid, and value given, to the experience of the user. This makes me very sad.

                                                                                                                    When people talk about why they use Electron, they always phrase it in terms of “developer productivity”, and that’s where I find the most elitist bullshit to be. Developers talk about using Electron so they didn’t have to learn a new platform, or so they only had to test it in one place, or it was faster. They talk about lower development costs (which they wildly overstate, in my experience).

                                                                                                                    But the questions I’d like us to start asking are: what are the costs of the shit user experience? What are the costs of people having to learn new tools that don’t behave quite like the others? When we save money and time on development, that money and time is saved once; but when we save time for our users, it’s saved repeatedly.

                                                                                                                    Maybe calling Electron shit is elitist bullshit. But I’ll take that over having contempt for one’s users.

                                                                                                                    1. 3

                                                                                                                      Contempt is a strong word. Would all these people be users in the first place if the app doesn’t exist for their platform? Go ahead and write a native Cocoa app for OSX, but that sure feels like contempt for Windows or Linux users. “Buy a new machine to use my stuff” vs. “deal with menus in the wrong order”?

                                                                                                                      1. 1

                                                                                                                        I never said “buy a new machine to use my stuff.”

                                                                                                                        From extensive experience: for most small applications, I can develop them natively on Mac, Windows, and Linux* faster than someone can develop the same thing with similar quality using a cross platform thing.

                                                                                                                        (*) “native” on Linux is less of a sticky thing than on Mac and Windows.

                                                                                                                        1. 3

                                                                                                                          Here’s a challenge for you: https://github.com/keyboardio/chrysalis-bundle-keyboardio (demo here).

                                                                                                                          Go do something like that natively for Mac and Windows. It’s a small app, some 3700 lines of JS code with comments. You do that, and I promise I’ll never write an Electron app ever again. You can make the world a better place!

                                                                                                                          1. 1

                                                                                                                            Thank you for your interest in my consulting services.

                                                                                                                            1. 4

                                                                                                                              Thought so.

                                                                                                                              FWIW, the app, like many Electron apps, was originally built in my unpaid free time. Complain about Electron apps once you’ve built the same stuff under the same conditions.

                                                                                                                  2. 9

                                                                                                                    What if I said macOS is without question a scourge, for it jails you in its walled garden? You’d be rightly upset.

                                                                                                                    I wouldn’t. I might point out that you absolutely can bypass their walled garden, on MacOS, but those objections for iOS are not only valid, but are honestly concerning.

                                                                                                                    Electron is a scourge. iOS is a scourge. Facebook is a scourge. The feature creep within the browser is absolutely a scourge. There are loads of scourges. This is not an exhaustive list. I pray daily for a solar flare which delivers enough of an EMP that it utterly destroys the entire technology landscape, and gives us an opportunity to rebuild it from the invention of fire onwards, because we’ve fucked up, and our technology is bad.

                                                                                                                    And I say this because I want technology to be better. The Web is an awfully complicated way to render a UI. Our systems are overly dependent on a few corporations who don’t have our best interests at heart. Our computers are slower to do less work than they did two decades ago. Pretty much the only way anybody makes money in tech anymore is by abusing their users (see: Google, Facebook, etc). Or their employees (see: Uber). Or both (see: Amazon).

                                                                                                                    1. 1

                                                                                                                      In the EMP scenario we’d be too busy trying to get essentials back up to care about doing it right

                                                                                                                    2. 7

                                                                                                                      You’d be rightly upset.

                                                                                                                      No, I wouldn’t be rightly upset. I am not the technologies I use, and neither is that true for you.

                                                                                                                      Electron is the technology used in multiple highly prominent applications that are written with little or no regard to platform conventions. Their success is bad for the native platforms. Those are statements of fact. If you have good reasons to use Electron, then there is no need for you to relate those facts to yourself and take them as a statement of your merits as an individual.

                                                                                                                      1. 7

                                                                                                                        Agree. The notion that your identity is somehow linked with the tools you use is toxic. It is fine to like your tools, but once you get comfy with them you should be pushing beyond them to expand your taste.

                                                                                                                        1. 5

                                                                                                                          your identity is somehow linked with the tools you use

                                                                                                                          I used QBasic, Visual Basic 6, and later FreeBASIC. If my tools define me, I feel like such a shallow person with no depth or skill. Such a sinking feeling. I think I’m going to re-install SPARK Ada and buy Matlab to feel like a mathematician. Yeah, that will be better… ;)

                                                                                                                  3. 4

                                                                                                                    Thanks for sharing the blog post.

                                                                                                                    I think your case is quite different from, say, Slack. I have worked on major cross-platform apps at a company, and it’s not a huge deal, when you have a few people working on it, whose collective knowledge covers those platforms. All apps used the same core libraries for the non-UI parts, and each app added native UI on top. A company with tens, or even hundreds of well-paid developers should be able to do that, if they care at all about accessibility, performance (which is a different kind of accessibility issue,) resource usage, and all those things.

                                                                                                                    1. 2

                                                                                                                      It is still a big deal, even for a larger company, because there’s a huge difference between employing N developers to develop a single cross-platform application (with some of them specializing in one platform or the other) and employing a set of developers to create native applications, plus a set for the core libraries. There may be overlap between them, but chances are that someone who’s good at developing for Windows or OSX would not be happy developing for Linux. So you end up employing more people, paying more, and diverging your UIs, for what? Paying customers will use the Electron app just as well, so what’s the point of going native and increasing costs?

                                                                                                                      Yeah, they should be able to do that; yes, it would improve the user experience; yes, it would be better in almost every possible way. Yet the benefits for the company are minuscule, most of the time. In the case of Slack, for example, or Twitter, a uniform experience across devices is much more important than native feel. It’s easier to document, easier to troubleshoot, and easier for people who hop between devices: it’s the same everywhere. That’s quite a big benefit, but it goes very much against making the apps feel native. And if you forgo native feel, but still develop a native application that looks and behaves like it does on any other platform (good luck with that, by the way), then the benefits of native boil down to being less resource-hungry. In the vast majority of cases, that is simply not worth the cost of the development and maintenance burden.

                                                                                                                      In the age of mobile devices, I do not feel that apps looking “native” is any benefit at all, but that’s a different topic.

                                                                                                                      1. 5

                                                                                                                        I built them in the past myself, and I’ve read write-ups about what it takes for others. If you design the program right, most of the code is shared between the different platforms. The things that are different are mostly in the front end that calls the common code. A lot of that can be automated to a degree, too, after the design is laid out. For the main three platforms, it basically took a max of three people, two of whom only worked on UI stuff here and there and were mostly focused on shared code. That isn’t the minimum, either: it can be lower if you have 1-2 developers who are experts in more than one platform. In mobile, many people probably know both iOS and Android.

                                                                                                                        The cross-platform part will be a small part of the app’s overall cost in most cases. It will mostly be in design/UI, too. That’s worth investing in anyway, though. :)

                                                                                                                        1. 5

                                                                                                                          Our experiences clearly differ, then. I worked for companies that made native apps for the major platforms (mobile included), and each team was 10+ people at a minimum, with little code shared (the common code was behind an API, so there’s that, but the apps themselves had virtually no code in common). Not to mention that the UIs differed a lot, because they were made to feel native. Different UIs, different designs, different bugs, different things to document and support. A whole lot of time was wasted on bridging the gaps.

                                                                                                                          If they made the apps feel less native, and have a common look across platforms, then indeed, it could have been done with fewer people. You’d have to fight the native widgets then, though. Or use a cross-platform widget library. Or write your own. And the writer of the article would then complain loudly, and proclaim that multi-platform apps are the worst that could happen to the Mac (paraphrasing), because they don’t follow the HIG, and developers nowadays just don’t care.

                                                                                                                          1. 3

                                                                                                                            each team was 10+ people at a minimum, with little code shared (the common code was behind an API, so there’s that, but the apps themselves had virtually no code in common).

                                                                                                                            “If they made the apps feel less native, and have a common look across platforms, then indeed, it could have been done with fewer people. “

                                                                                                                            I said a native look on multiple platforms while minimizing cost. Your example sounds like something about the company rather than an inherent property of cross-platform. You usually need at least one person per platform, especially UI and style experts, but most of the code can be reused. The UIs just call into it. They’ll have some of their own code, too, for functions specific to that platform. Mostly portable. It just sounds like the company didn’t want to do it that way, didn’t know how, or maybe couldn’t due to constraints from legacy decisions.

                                                                                                                            1. 1

                                                                                                                              My experience mirrors yours almost exactly.

                                                                                                                      2. 3

                                                                                                                        What about sciter? Companies doing stuff like anti-virus have been using it for a long time with way, way less resource use. Its licensing scheme looks like something other vendors should copy. Here’s a comparison claiming a simple editor is 2MB in sciter vs 100+ in Electron, just because it brings in less baggage.

                                                                                                                        How many people using Electron could use the free, binary version of sciter? And how many companies using Electron could afford $310 per year? I mean, that’s within reach of startups and micro-businesses, yeah? I’m asking because you said it’s either use Electron or no cross-platform app at all, because anything else is too costly/difficult. I’m making it easy by using a comparable offering rather than, say, Lazarus w/ Free Pascal or other non-mainstream languages. :)

                                                                                                                        Note: I also have a feeling lots of people just don’t know about this product, too.

                                                                                                                        1. 4

                                                                                                                          I actually evaluated sciter when I got frustrated with developing in JS and wanted to go native. For my use, it wasn’t an option, because the app I write is free software, and therefore so must be its dependencies. For closed-source use, it’s probably fine. Though you’d still have to write plenty of platform-specific code. AV companies already do that, but companies that start off with a web-based service and develop an application later do not have that platform-specific code already built. What they have is plenty of JavaScript and web stuff. Putting that into Electron is considerably easier, and you end up with a very similar UX with little effort, because you’re still targeting a browser.

                                                                                                                          Oh, and:

                                                                                                                          With Sciter, changing the front end of your application involves just altering the styles (CSS), and probably a couple of scripts that do animations.

                                                                                                                          Yeaah… no. That’s not how changing the UX works. It’s a tad more involved than that. Not sure I’d be willing to trust my UI on an offering that basically tells me that changing between, say, Metro and Material UI is a matter of CSS and animations. (Hint: it’s much more involved than that.)

                                                                                                                          Since we’ve already decided we’re not going to follow any platform HIGs (and thus have the article author proclaim we’re a scourge), what’s the point of going native? Less resource use? Why is that worth it for the company? People use the app as it is; otherwise they wouldn’t be writing an app. Writing native, and making it look and behave the same, has a non-negligible cost but little to no benefit. Faster and lighter is in many cases not a goal worth pursuing, because customers buy the thing anyway. (I’d love it if that weren’t the case, but in many cases, it is.)

                                                                                                                          1. 1

                                                                                                                            I appreciate the balanced review of sciter. That makes sense.

                                                                                                                      1. 9

                                                                                                                        I keep a VM with the latest release of Haiku on it because someday, it will be mature enough for me to use as my daily driver. IT WILL. SHUT UP. IT TOTALLY WILL.

                                                                                                                        //It’s so cool.

                                                                                                                        1. 1

                                                                                                                          It’s possible, but you should definitely write something up if you do.

                                                                                                                        1. 8

                                                                                                                          the problem with css as a language is that it does not make conceptually simple things simple. particularly when it comes to the sort of neat, grid-based layout that app developers are used to from using frameworks like gtk, qt, windows forms, etc.

                                                                                                                          if i can sketch a layout in a few minutes on a piece of paper, it should not take days of fighting with css to get it to work right, even if i (admittedly) don’t have much experience in front end development; there is no other UI library, toolkit or framework that i found so hard to get to grips with.

                                                                                                                          1. 12

I’d argue that when using Windows Forms, Qt, WPF, etc., I find myself stifled by a lack of freedom whenever I try to make an interface of any complexity. Usually it takes a few lines of CSS (especially with modern CSS features like grid and flex) to get most layouts set up, whereas in native application frameworks, things like trying to style all buttons the same color are frustratingly difficult. In some toolkits, like Windows Forms, it’s practically impossible to style components as a group instead of individually.

                                                                                                                            I don’t think this is a problem with CSS as it exists today, though the argument may have been valid 10 years ago.
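For a concrete (if contrived) illustration of the “few lines of CSS” point, a single rule injected from script restyles every button at once; the selector and colours here are arbitrary, and a browser context is assumed:

    // Minimal sketch: the kind of global restyling that is painful in
    // WinForms/WPF is one injected CSS rule here.
    const style = document.createElement("style");
    style.textContent = `
      button {
        background: #2d6cdf;   /* arbitrary example colour */
        color: white;
        border: none;
        border-radius: 4px;
        padding: 0.5em 1em;
      }
    `;
    document.head.appendChild(style);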

                                                                                                                            1. 10

                                                                                                                              I think this arises from a fundamental difference between UI philosophies.

                                                                                                                              In web applications, your design should represent your unique brand. Buttons shouldn’t be buttons, but should be your buttons.

                                                                                                                              In client apps, consistency is key, so all buttons should be the OS-standard buttons.

                                                                                                                              The problem becomes, who is in control of their experience? My current desktop environment is light text on a dark background. Any website I load is apparently free to override this. I have to leverage browser extensions to trick websites into looking the way I want them to look. Worse- because my desktop foreground color is light, and my desktop background color is dark, I’m playing a game of roulette with websites. Some of them change the foreground text color, others change the background color, but many don’t change both, which means I’m left with dark-on-dark or light-on-light text.

                                                                                                                              The fundamental unit of control should be the end user, not the designer. I should decide what a button looks like, and if I delegate that responsibility to another piece of software, it should be my OS/DE first. The worst thing about the web is the idea that each website needs to look their own way. Fuck your website. Buttons should always look like buttons. Text fields should always look like text fields. The font I set as my default font should be the only font that displays text for readability (feel free to use bullshit fonts for shit I don’t care about, like logos or ads, bullshit deserves bullshit).

                                                                                                                              Users should be the owners of their experience. Always and forever. And yes, I do override your CSS. All the time.

                                                                                                                              1. 2

You mean, with fervor, that only the content should be bundled, rather than the content + the style + the platform?

                                                                                                                                I agree that extracting the actual content out of the website is getting challenging…

No, I do not want to subscribe, just the text. No, I do not wish to send you cookie tokens. Oh, a sub-sub-menu to disable them, let’s do them all. Now what? Ad blocking does not work? Oh right, self-promotion from the publisher… And I can’t read with these 3em quotes all over the article.

Sigh. Do you serve the article over FTP or Gopher? Hehe, no, of course not.

Ah, the text browser does not even load the page; the content loads through JavaScript.

                                                                                                                                Ok, let’s stick to the website.

                                                                                                                                1. 1

And on the publishing side: I must have lost 500–1000-word texts because the session I was logged into had timed out, something like 50 times by now.

                                                                                                                              2. 4

Let’s not forget the stack that lies below: it just takes a few lines of CSS plus a zillion lines of code in the web browser. Chromium has more lines of code than FreeBSD, OpenBSD, DragonflyBSD and NetBSD combined, for instance.

                                                                                                                              3. 6

Yeah, the native app toolkits tend to work on a “place things on a grid” strategy, whereas CSS is much more about “flowing the content according to the viewport”.

It was definitely the right choice in the end, given smartphones becoming people’s main browsing devices, but it’s unlike what a lot of programmers were used to.

                                                                                                                                I think once you think about the box model deeply, and stop trying to place things at certain parts of the screen, you can reach acceptance of the system more quickly.

                                                                                                                                This post on CSS positioning was a huge eye opener for me in this regard. Stuff goes where it goes

                                                                                                                                1. 5

Hmm, often when conceptually simple things aren’t simple, it may be because they’re not as simple as they appear. I use Bulma.io for CSS, and it’s pretty light as far as frameworks go and gives me some sane defaults, but if I were actually a front-end developer I can see how I would use less and less of the provided classes.

                                                                                                                                  1. 3

                                                                                                                                    So, depending on what level of browser support you’re willing to work with, CSS grid is rather good at doing that sort of grid-based layout you speak of. At least at the basic levels where I have experience with it.

                                                                                                                                    If you’ve never used it, it is very worth a look

                                                                                                                                    1. 1

                                                                                                                                      thanks, i’ll give it a try. this is what i ended up with using flex (which i thought was the way to go, but was way fiddlier than i expected)

                                                                                                                                      1. 2

                                                                                                                                        Flex is good for when you want things to flow in one dimension (with the possibility of wrapping to “fake” a second dimension) but don’t care about the exact sizes. For instance, a bunch of text blobs that you want to sit next to each other and spread out to take up the available space.

                                                                                                                                        For actual grid stuff though, you should use CSS grid, which allows you to place elements in a 2 dimensional grid and give them exact sizes.

Both features have their use cases, but neither is a complete solution in and of itself.

                                                                                                                                        Here are some good articles on the use cases for each:

                                                                                                                                        Also here’s a comprehensive guide to using CSS Grid: https://css-tricks.com/snippets/css/complete-guide-grid/
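As a rough sketch of that split (element names and track sizes are just placeholders), the same distinction shows up even when you drive the styles from script:

    // Flex: items flow along one axis and share the available width.
    const toolbar = document.createElement("div");
    toolbar.style.display = "flex";
    toolbar.style.flexWrap = "wrap";
    toolbar.style.gap = "8px";

    // Grid: explicit rows/columns with exact track sizes, like a classic form layout.
    const form = document.createElement("div");
    form.style.display = "grid";
    form.style.gridTemplateColumns = "120px 1fr";      // label column + stretching field column
    form.style.gridTemplateRows = "repeat(3, 32px)";
    form.style.gap = "4px 8px";

    document.body.append(toolbar, form);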

                                                                                                                                    2. 2

                                                                                                                                      Personally I’d take CSS over WPF or Winforms any day. CSS certainly has its annoyances, but I found it far less annoying to work with for anything nontrivial.

                                                                                                                                    1. 27

                                                                                                                                      I think people talking about inspecting the source before installing dependencies are being unreasonable to some degree.

                                                                                                                                      1. The malicious code was present only in the minified version of the code. I suppose the red flag that tipped the reporter was the lack of history/popularity of the repository in question, but it doesn’t have to be like that
                                                                                                                                      2. It can be released to npm in a way that’s not evident to casually browsing the github repo
                                                                                                                                      3. There isn’t even any guarantee that the code on npm matches what’s on github at all

                                                                                                                                      Meaning the ways to be safe are:

1. Hand-inspect the code in your node_modules directory (including — or especially — those that may be minified); or
                                                                                                                                      2. Don’t use npm at all.

                                                                                                                                      I don’t see these people (nor myself) doing either. From which it follows:

                                                                                                                                      Any company desiring to buy into the so-called “modern” front end development (be it for productivity, performance or hiring purposes) does so by making itself vulnerable to attacks such as this.

                                                                                                                                      I don’t know if that’s a reasonable price to pay to use, say, React, but it sure isn’t reasonable to me to pay that to use Node (versus, say, Golang, which can reasonably be used to build the same kinds of apps using little more than the standard library).

                                                                                                                                      1. 21

                                                                                                                                        The malicious code was present only in the minified version of the code. I suppose the red flag that tipped the reporter was the lack of history/popularity of the repository in question, but it doesn’t have to be like that

                                                                                                                                        One more reason for reproducible builds… minified JS should be treated like compiled code and automated mechanisms should check if it matches the unminified version…
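As a sketch of what such a check could look like (assuming the package is minified with terser, that the minifier version and options are pinned, and with made-up file paths):

    // Rough sketch of the idea, not anything the npm registry actually does:
    // treat dist/*.min.js as a build artifact and check that re-minifying the
    // published source reproduces it.
    import { readFileSync } from "node:fs";
    import { minify } from "terser";

    async function checkMinifiedMatches(srcPath: string, distPath: string): Promise<boolean> {
      const source = readFileSync(srcPath, "utf8");
      const published = readFileSync(distPath, "utf8");
      const rebuilt = await minify(source);   // would need the project's exact minify options
      return rebuilt.code?.trim() === published.trim();
    }

    // Hypothetical paths for illustration only.
    checkMinifiedMatches("index.js", "dist/index.min.js").then((ok) =>
      console.log(ok ? "minified file matches source" : "MISMATCH: investigate before trusting"),
    );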

                                                                                                                                        1. 6

This, a thousand times this. I can’t comprehend the reasoning that goes into committing derived code into source control. It’s a pain to remember to update it every time you commit, it’s hard to verify the code matches the original source, and it just pollutes the history. Diffing is mostly impossible too.

                                                                                                                                          1. 3

I think the reasoning is to avoid a build dependency. For some time it was common practice to include the Autoconf-generated configure script in release artifacts, so that users could avoid installing Autoconf.

                                                                                                                                            1. 1

                                                                                                                                              Yeah, that’s annoying too (and a lot of projects still do it even though it’s not really good practice), but at least configure scripts don’t tend to/need to change with every single code change like these minified files do.

                                                                                                                                              1. 1

Generated Autoconf configure scripts are pretty easy to read; I can say there were times I preferred them over the m4 source.

                                                                                                                                          2. 11

                                                                                                                                            It would be really nice if the package repositories (npm/pypi/rubygems/etc) did something:

                                                                                                                                            • try to automatically detect obfuscated code
                                                                                                                                            • stop letting maintainers upload packages from their dev machines, make sure any compilation happens on a public CI environment from a known git tag (this would also encourage non-compiled packages, i.e. just direct snapshots of git tags)
                                                                                                                                            • have some popularity threshold for packages beyond which manual review from a trusted group of reviewers is required for each new release
                                                                                                                                            • (also why not require the git tags to be gpg signed for these popular packages)
                                                                                                                                            • maybe rethink the whole package handover thing, maybe only allowing “deprecation in favor of [a fork]” (i.e. requiring every update to be manual) is good
                                                                                                                                            1. 3

I wouldn’t even trust checking the node_modules output either, as package installation can execute arbitrary code (node-gyp and other node bindings producing code at install time)

                                                                                                                                              1. 4

                                                                                                                                                I agree with you!

People seem to like to hit on npm, but I don’t see how the core issue is different from, say, PyPI, Cargo or Go (other than the issues you raised). I personally take easy and simple dependency management over C/C++’s fragmented package management, because most of my projects are not security-critical anyway, or my threat model doesn’t include targeted code injection in my stack.

I find it annoying when people look at these issues and some of the fault is put on the maintainers. Maybe the issue is not the compromise of one of your application’s thousands of dependencies, but the fact that your risk management for your wallet application relies on thousands of unvetted dependencies…

                                                                                                                                                Meaning the ways to be safe are:

I guess a first step would be to gather a bunch of useful and common repositories, ensure they and all their dependencies are well vetted and signed by the maintainers for each release, prevent any new dependencies from being pulled in without proper review, and ensure those dependencies follow the same process. Documenting and enforcing such a process for a subset of widely used dependencies would let me trust a few organizations and avoid code-reviewing every dependency I pull into my own projects. I guess most distributions’ core repositories have a similar process, like Arch’s maintained packages vs the AUR.

                                                                                                                                                1. 8

PyPI absolutely has the same potential issues, though in practice I think the dependency trees for popular projects are way smaller than what you get in the node ecosystem. So you’re much less likely to be hit by a transitive vulnerability. To me this is one of the advantages of a fairly comprehensive standard library, and a relatively small number (compared to node, at least) of popular, high-quality third-party libraries that get a lot of eyeballs.

                                                                                                                                                  1. 11

                                                                                                                                                    On top of that, a lot of Python code is deployed to production by system engineers. Often it’s vetted, built, tested and baked in by distributions - and the same is true for other non-web languages.

                                                                                                                                                    javascript, on the other hand, is more often deployed by the upstream developer and thrown at web browsers straight away without any 3rd party review.

                                                                                                                                                    1. 3

Definitely! But it somehow just happened to end up this way. It would be nice to look at the social side of why Python ended up like this while nothing prevented it from ending up like npm. Maybe some key aspect of the tooling drove the trend one way or the other, or it might just be the community (Python being much older, and the tooling having seen a lot of changes over the years).

I would look forward to someone doing a graph analysis of a few package repositories across languages, finding some way to rate them and put a risk score on packages. How many dependencies do they have, and how deep do they go? How many of them are maintained by an external maintainer? Sounds like I’ve found myself a new weekend project…

                                                                                                                                                      1. 12

                                                                                                                                                        Python has a decent class library. Good libraries that have general use migrate back into that class library, in some fashion or another. Thus, third party libraries don’t have to have long dependency chains to do anything.

                                                                                                                                                        What NPM forgot was that this was the fundamental idea that made package management useful. This stretches back to the early days of Java, at least, and I’m sure you can find other similar examples. By having a rich class library which already provides most of what you need, you’re simply going to layer on dependencies to adapt that framework to your specific business needs. Java, .NET, Ruby, Python- they all have that going for them. JavaScript simply does not. So half the Internet unwittingly depends on leftpad because a dependency of a dependency of a dependency needed it, and there wasn’t anything in the core library which could do it.
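For scale, the whole idea fits in a couple of lines, and JavaScript’s standard library has since grown it anyway (String.prototype.padStart, ES2017); this is a sketch of the idea, not the actual left-pad package’s code:

    // The entire "left-pad" idea, for scale.
    function leftPad(str: string, len: number, ch = " "): string {
      return ch.repeat(Math.max(0, len - str.length)) + str;
    }

    console.log(leftPad("5", 3, "0"));      // "005"
    console.log("5".padStart(3, "0"));      // "005" -- covered by the standard library since ES2017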

                                                                                                                                                        1. 1

                                                                                                                                                          Maybe some key aspect of the tooling drive the trend one way or the other or it might be just the community (Python being much older and the tooling has seen a lot of changes over the years).

                                                                                                                                                          I think this is a big part of it — Python’s tooling is generally less capable than the Node ecosystem’s.

To this day Pip doesn’t have a dependency resolver, so the result of installing a dependency tree with conflicts at the transitive-dependency level isn’t an error, but an arbitrary version getting installed. You can only have a single version of a Python module installed, too, because modules are global state. Contrast with how npm has historically installed (and still does?) multiple versions of a package, effectively vendoring each dependency’s dependency tree, transitively.

                                                                                                                                                          Additionally, publishing Python packages has long been messy and fraught. Today there is decent documentation but before that you were forced to rely on rumors, hearsay, and Stack Overflow. Putting anything nontrivial on PyPI (e.g., a C extension module) is asking for a long tail of support requests as it fails to install in odd environments.

                                                                                                                                                          I think the end result was a culture that values larger distributions to amortize packaging overhead. For example, the popular Django web framework long had a no-dependencies policy (if dependencies were required, they were vendored — e.g. simplejson before it entered the standard library).

                                                                                                                                                          Regardless of the reasons for it, I think that this is healthier than the Node culture of tiny dependencies with single contributors. More goes into distributing software than just coding and testing — documentation, support, evangelism, and legal legwork are all important — but tiny libraries have such limited scope that they’ll never grow a social ecosystem which can persist in the long term (of course, even Django has trouble with that).
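A quick way to see that vendoring in practice is to walk node_modules and list every package that shows up at more than one version; here is a rough sketch (it ignores symlinks, workspaces and other edge cases):

    // List packages installed at more than one version -- the nested,
    // vendored copies described above.
    import { readdirSync, readFileSync, existsSync } from "node:fs";
    import { join } from "node:path";

    const versions = new Map<string, Set<string>>();

    function walk(dir: string): void {
      if (!existsSync(dir)) return;
      for (const entry of readdirSync(dir)) {
        const pkgDir = join(dir, entry);
        if (entry.startsWith("@")) { walk(pkgDir); continue; }   // scoped packages
        const manifest = join(pkgDir, "package.json");
        if (!existsSync(manifest)) continue;                     // .bin and friends
        const { name, version } = JSON.parse(readFileSync(manifest, "utf8"));
        if (name && version) {
          if (!versions.has(name)) versions.set(name, new Set());
          versions.get(name)!.add(version);
        }
        walk(join(pkgDir, "node_modules"));                      // nested copies
      }
    }

    walk("node_modules");
    for (const [name, vs] of versions) {
      if (vs.size > 1) console.log(`${name}: ${[...vs].join(", ")}`);
    }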

                                                                                                                                                          1. 1

                                                                                                                                                            You can only have a single version of a Python module installed, too, because they are global state.

                                                                                                                                                            That’s actually a pretty good point I think. I have fought myself a few time against Pip due to conflicting versions. It does benefits library with fewer dependancies.

                                                                                                                                                      2. 1

While I’m not generally a fan of it, I think the minimal version selection that’s planned for the future Go package manager would make this kind of attack spread much more slowly.
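A toy sketch of why (only the idea, not Go’s actual implementation): every requirement is a minimum, and the selected version is the highest minimum anyone asks for, so a freshly published release is ignored until some module explicitly requires it:

    type Version = [number, number, number];

    function maxVersion(a: Version, b: Version): Version {
      for (let i = 0; i < 3; i++) {
        if (a[i] !== b[i]) return a[i] > b[i] ? a : b;
      }
      return a;
    }

    // MVS picks the maximum of the required minimums -- never "whatever is
    // newest on the registry".
    function selectVersion(requiredMinimums: Version[]): Version {
      return requiredMinimums.reduce(maxVersion);
    }

    // Hypothetical module required by three consumers:
    const mins: Version[] = [[1, 2, 0], [1, 4, 1], [1, 3, 0]];
    console.log(selectVersion(mins));   // [1, 4, 1]
    // A freshly published (possibly malicious) 1.5.0 is not picked up until
    // some consumer explicitly bumps its minimum to 1.5.0.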

                                                                                                                                                    1. 8

                                                                                                                                                      The browser is the new operating system. An operating system where code is automatically downloaded and run.

                                                                                                                                                      Our desktop OS’s are the old operating systems. Code must be manually downloaded, often then manually installed, and then run. Package managers make this a little easier, but still require lots of intentional and consensual steps

                                                                                                                                                      If we ignore all of the security and trust issues of this new model of OS, it’s interesting to think about and compare. It’s winning because it’s what users want: less barrier to entry, less effort.

                                                                                                                                                      Is this barrier useful to have for other (eg social) reasons too? Would it make people think more if they had to download and install services such as Facebook, slack, youtube, github, etc before they used them?

                                                                                                                                                      1. 11

                                                                                                                                                        The irony is that the “old” way of doing it is difficult in large part because it’s such a bad idea to automatically download and run applications. It’s not like it can’t be done - ActiveX was basically a native application downloaded and run automatically inside a browser, but it was so unsafe Microsoft eventually killed it.

                                                                                                                                                        1. 5

                                                                                                                                                          Circa 1998, I went out and wrote a web-based version of Notepad using ActiveX and VBScript, just to prove how terrifying ActiveX should be to people. I had to build it to social engineer its way past the “hey, are you sure you want to do this?” confirmation box, but that’s easy- I popped up a series of Alert/Confirm boxes before loading the ActiveX component. Click fatigue meant everybody just okayed their way through.

                                                                                                                                                          This was the 90s, though. I’m sure users of today are way more savvy. /s

                                                                                                                                                          (I mean, honestly, Facebook has to have warnings that pop up if you bring the dev tools up because people were social engineered into copy/pasting code into the debugger!)

                                                                                                                                                          1. 1

                                                                                                                                                            Wow. Used win32 apis and everything to draw the widgets?

                                                                                                                                                            1. 1

                                                                                                                                                              It was using ActiveX to draw the widgets, so under the hood, yeah, there were some Win32 API calls but my app wasn’t making them directly. ActiveX was crazy, and symptomatic of Microsoft’s approach to everything at the time: ours does more and thus is better and oh, also, completely proprietary to our ecosystem. Don’t use Netscape/Mozilla. Use IE, because it has ActiveX which is WAAAAAAAY more powerful than JavaScript!

                                                                                                                                                              1. 2

                                                                                                                                                                Absolutely beautiful.

The forest of abandoned projects and legacy DLLs in Windows is a bit like a game world. So much is still left around in ways that aren’t useful or reliable to most people, but are still lots of fun to play with. “Paint the walls with executable code,” they must have shouted.

                                                                                                                                                                Case in point: Windows isn’t just the world, it’s also the sun and the moon. Try this one in your Run dialog (if you trust me :P):

                                                                                                                                                                rundll32 loghours.dll ConnectionScheduleDialog

                                                                                                                                                                A very unique interface for (what I think is) scheduling modem dialing hours. This dialog oozes with intrigue, and the versions in Win7 and later even have updated icons. I wonder if there are people in MS who maintain short lists of some of these DLLs. Probably not, the icons might be incidental.

                                                                                                                                                                Not as obscure, but an intriguing old and forgotten feature:

                                                                                                                                                                rundll32 infocardapi.dll ManageCardSpace

                                                                                                                                                                I’m glad it still steals your whole screen. How important this project must have been! No other work you were doing at the time could compare.

                                                                                                                                                                I wonder what the fallout of this feature being forgotten was. Someone, somewhere still has a congratulatory custom deck of playing cards tucked away.

                                                                                                                                                                I’ve been brute forcing my way through Windows’ DLL collection to find these (I have a long list of more). My poor VM has sustained quite a bit of permanent damage from calling every exported function in every DLL, but I’m still going. I think I’m up to ‘W’.

                                                                                                                                                                Next I want to try and find a way to call the non-rundll32 compatible functions in these DLLs. A proper compiled language (C or AutoIt) is a copout, I want to find a nice hacky way of doing full calls from batch, vbs or similar. Then I’d be able to make full win32 GUIs from nothing but a portable script file and my life would be complete.

                                                                                                                                                          2. 3

                                                                                                                                                            Yes, but people want to actually do stuff with their computers. If the only way to do it is to manually download and install a win32 app, that’s what’ll happen. If the only way to do it is to click through the JavaScript-is-dangerous warnings, that’s what’ll happen.

The new macOS sandbox is baked into the Open File dialog; the app can’t access a file unless it goes through the OS-provided file picker, and the proposed web API is going to work the same way (though, obviously, there are still arguments about replacing bashrc and such). As the Mill people put it: security needs to be ubiquitous, unavoidable, and efficient, or it won’t get used.

                                                                                                                                                            1. 3

                                                                                                                                                              I don’t see what you’re getting at.

                                                                                                                                                              To take my previous comment a step further, it seems moving as many apps as possible into a web browser hasn’t achieved much except setting us back ~20 years in terms of performance and usability. At best we just added a bunch of new problems (malware, tracking, advertising) to the ones we already had.

                                                                                                                                                              Just one example: compare Office 98 with the latest version of Office 365, and Office 98 is better in almost every way - more features, better UI, better keyboard shortcuts, better performance.

                                                                                                                                                              Office 365’s only win is that it’s cross-platform, but that’s a marginal advantage because there was a Macintosh version of Office 98 and it ran in Wine.

                                                                                                                                                              1. 2

                                                                                                                                                                I’m arguing that requiring users to manually install apps, instead of automatically downloading and running them, is not an effective security barrier. If clicking through a warning is perceived as necessary to get to the desired end-state, then the warning will be clicked through; malware authors will happily guide someone through setting the browser or operating system to developer mode if that’s what’s needed.

                                                                                                                                                                1. 2

                                                                                                                                                                  On that I agree with you. Poorly educated users are a big security vulnerability.

                                                                                                                                                                2. 2

Exactly. Most stuff on my Windows 98 box was better than most web stuff today. It ran on a Pentium II with 64MB of RAM. I don’t expect that efficiency with today’s higher requirements. I do expect a better developer and user experience, with resource use commensurate with what that takes.

                                                                                                                                                            2. 8

                                                                                                                                                              Would it make people think more if they had to download and install services such as Facebook, slack, youtube, github, etc before they used them?

                                                                                                                                                              For many of those things they still have to on smart phones. They are even told what the app will have permission to access. And they still blindly allow whatever it is.

                                                                                                                                                              1. 4

The high-assurance security community has kept working on securing browsers. They mostly just isolated them on separation kernels in dedicated VMs. However, a few teams applied similar principles to secure browser architectures. At least one, IBOS, was both an OS and a browser. I had a list in this comment.

                                                                                                                                                                1. 1

                                                                                                                                                                  Would it make people think more if they had to download and install services such as Facebook, slack, youtube, github, etc before they used them?

                                                                                                                                                                  I wouldn’t download them because I don’t trust them with my data.

                                                                                                                                                                1. 21

                                                                                                                                                                  I used to work in academia, and this is an argument that I had many times. “Teaching programming” is really about teaching symbolic logic and algorithmic thinking, and any number of languages can do that without the baggage and complexity of C++. I think, if I was in a similar position again, I’d probably argue for Scheme and use The Little Schemer as the class text.

                                                                                                                                                                  1. 10

                                                                                                                                                                    This is called computational thinking. I’ve found the topic to be contentious in universities, where many people are exposed to programming for the first time. Idealists will want to focus on intangible, fundamental skills with languages that have a simple core, like scheme, while pragmatists will want to give students more marketable skills (e.g. python/java/matlab modeling). Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

                                                                                                                                                                    Regardless, I think we can all agree C++ is indeed a terrible first language to learn.

                                                                                                                                                                    1. 9

                                                                                                                                                                      Ironically, if you’d asked me ten years ago I would’ve said Python. I suppose I’ve become more idealist over time: I think those intangible, fundamental skills are the necessary ingredients for a successful programmer. I’ve worked with a lot of people who “knew Python” but couldn’t think their way through a problem at all; I’ve had to whiteboard for someone why their contradictory boolean condition would never work. Logic and algorithms matter a lot.
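Something as small as this (a made-up example of the kind of contradiction I mean) can be surprisingly hard to see for someone who has only learned syntax:

    // No value of n can be both below 0 and above 10, so this branch is dead code.
    function check(n: number): string {
      if (n < 0 && n > 10) {     // always false
        return "impossible";
      }
      return "reachable";
    }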

                                                                                                                                                                      1. 9

I think Python is a nice compromise. The syntax and semantics are simple enough that you can focus on the fundamentals, and at the same time it gives students a base to explore more practical aspects if they want.

                                                                                                                                                                      2. 7

                                                                                                                                                                        Students also get frustrated (understandably) at learning “some niche language” instead of the languages requested on job postings.

                                                                                                                                                                        Yeah, I feel like universities could do a better job at setting the stage for this stuff. They should explain why the “niche language” is being used, and help the students understand that this will give them a long term competitive advantage over people who have just been chasing the latest fads based on the whims of industry.

                                                                                                                                                                        Then there is also the additional problem of industry pressuring universities into becoming job training institutions, rather than places for fostering far-looking, independent thinkers, with a deep understanding of theory and history. :/

                                                                                                                                                                        1. 3

                                                                                                                                                                          I’ve been thinking about this a bit lately, because I’m teaching an intro programming languages course in Spring ‘19 (not intro to programming, but a 2nd year course that’s supposed to survey programming paradigms and fundamental concepts). I have some scope to revise the curriculum, and want to balance giving a survey of what I think of as fundamentals with picking specific languages to do assignments in that students will perceive as relevant, and ideally can even put on their resumes as something they have intro-level experience in.

                                                                                                                                                                          I think it might be getting easier than it has been in a while to square this circle though. For some language families at least, you can find a flavor that has some kind of modern relevance that students & employers will respect. Clojure is more mainstream than any Lisp has been in decades, for example. I may personally prefer CL or Scheme, but most of what I’d teach in those I can teach in Clojure. Or another one: I took a course that used SML in the early 2000s, and liked it, but it was very much not an “industry” language at the time. Nowadays ReasonML is from Facebook, so is hard to dismiss as purely ivory tower, and OCaml on a resume is something that increasingly gets respect. Even for things that haven’t quite been picked up in industry, there are modernish communities around some, e.g. Factor is an up-to-date take on stack languages.

                                                                                                                                                                          1. 3

One way you can look at it is: understanding how to analyse the syntax and semantics of programming languages can help you a great deal when learning new languages, and even when learning new frameworks (Rails, RSpec, Ember, React, NumPy, regexes, query builders, etc. could all be seen as domain-specific PLs embedded in a host language). Often they have weird behaviours, but it really helps to have a mental framework for quickly understanding new language concepts.

                                                                                                                                                                            Note that I wouldn’t recommend this as a beginner programming language course - indeed I’d probably go with TypeScript, because if all else fails they’ll have learned something that can work in many places, and sets them on the path of using types early on. From the teaching languages Pyret looks good too, but you’d have to prevent it from being rejected. But as soon as possible I think it’s important to get them onto something like Coursera’s Programming Languages course (which goes from SML -> Racket -> Ruby, and shows them how to pick up new languages quickly).

                                                                                                                                                                        2. 7

                                                                                                                                                                          I started college in 1998, and our intro CS class was in Scheme. At the time, I already had done BASIC, Pascal, and C++, and was (over)confident in all of them, and I hated doing Scheme. It was different, it was impractical, I saw no use in learning it. By my sophomore year I was telling everyone who would listen that we should just do intro in Perl, because you can do useful things in it!

                                                                                                                                                                          Boy howdy, was I wrong, and not just about Perl. I didn’t appreciate it at the time, and I didn’t actually appreciate it until years later. It just sorta percolated up as, “Holy crap, this stuff is in my brain and it’s useful.”

                                                                                                                                                                          1. 3

I hear this reasoning about teaching tangible skills, but even one, two, or three quarters of Python is not enough for a job, or at least it shouldn’t be. If it is, then employers are totally OK with extremely shallow knowledge.

                                                                                                                                                                            1. 1

I didn’t even realize I had read this a month ago, never mind that I had commented on it, before I wrote my own post on the topic. Subconscious motivation at its finest.

                                                                                                                                                                          1. 16

                                                                                                                                                                            90s C code has plenty of weird properties, but the strangest is an absolute refusal to use proper indentation combined with a massive lack of visual whitespace.

                                                                                                                                                                            I do not believe that is actually an essential property of 90s C code.

                                                                                                                                                                            1. 2

                                                                                                                                                                              And more to the point, I can see pretty clear reasons within the excerpt as to why it’s formatted that way. I probably wouldn’t have written it that way, but it’s not nearly as terrible as the author is trying to make it sound.

Yes, the lack of indentation under the while loop makes it harder to see what’s contained in the loop. But digging up the actual source, it’s pretty clear that the method in question is basically a little “pre-work”, a “main loop”, and a little “post-work”. The version I linked actually indents the loop body, but that indentation isn’t necessary, even for readability.

                                                                                                                                                                              I’m a C incompetent, and this code doesn’t particularly bother me.

                                                                                                                                                                              Edit: the second excerpt is honestly annoying, as it indents things more than it should, which is the opposite of the first block, which didn’t indent very much at all.

                                                                                                                                                                            1. 3

No, it’s vector multiplication disguised as a Markov chain.

                                                                                                                                                                              1. 3

That seems like a category mistake to me, whereas the title of the article doesn’t.

A Markov chain may be a specific pattern of vector multiplications, but that pattern makes all the difference; Markov chains and vector multiplications sit at different levels of description. ‘Deep learning’ and ‘Markov chain’, on the other hand, are terms for alternative patterns of vector multiplications, one a lot more involved than the other.
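
To make that concrete, here’s a minimal toy sketch (my own illustration, nothing from the article): one Markov chain step is literally a state vector multiplied by a transition matrix, while a deep network stacks many such multiplications with nonlinearities in between.

    import numpy as np

    # Toy 2-state weather chain (made-up numbers).
    # P[i][j] = probability of moving from state i to state j.
    P = np.array([[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
                  [0.5, 0.5]])  # rainy -> sunny, rainy -> rainy

    state = np.array([1.0, 0.0])  # start out certainly sunny

    # One Markov step is a single vector-matrix multiplication.
    state = state @ P
    print(state)  # -> [0.9 0.1]

    # A deep model is also built from matrix multiplications, but it chains
    # many of them with nonlinearities in between (x -> max(0, x @ W + b),
    # layer after layer): a far more involved pattern of the same operation.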

                                                                                                                                                                                1. 2

There’s a video on YT somewhere of a talk by a physicist (IIRC) on why deep learning is so ridiculously effective. It pretty much boils down to the same reason that mathematics is so unreasonably effective at describing physical systems in general: (handwaving extremely wildly from memory) physical systems tend to be simple functions of their inputs (albeit with many, many inputs!) in which causality is preserved. That is what makes it possible for RNNs and the like to approximate physical systems in various ways: the nature of those systems is exactly what permits approximations of their information content to be at least partially valid instead of a total loss.

                                                                                                                                                                                  (I tried to find the video, but there are too many terrible ones on the same topic these days. I’ll have another look later.)

                                                                                                                                                                                2. 4

                                                                                                                                                                                  No, it’s a monoid in the category of endofunctors.

                                                                                                                                                                                1. 6

                                                                                                                                                                                  Government jobs tend to be 40 hours or less. State government in my state has a 37.5 hour standard. There is very occasional off-hours work, but overtime is never required except during emergencies – and not “business emergencies”, but, like, natural disasters.

                                                                                                                                                                                  1. 8

I’m surprised that tech workers turn up their noses at government jobs. Sure, they pay less, but the benefits are amazing! And they really don’t pay that much less in the scheme of things.

                                                                                                                                                                                    How many private sector tech jobs have pensions? I bet not many.

                                                                                                                                                                                    1. 9

                                                                                                                                                                                      I work in a city where 90% of the folks showing up to the local developer meetup are employed by the city or the state.

                                                                                                                                                                                      It’s taken a lot of getting used to being the only person in the room who doesn’t run Windows.

                                                                                                                                                                                      1. 4

                                                                                                                                                                                        I feel like this is pretty much the same for me (aside from the meetup bit).

Have you ever worked with Windows, or have you been able to stay away from it professionally?

                                                                                                                                                                                        1. 3

                                                                                                                                                                                          I used it on and off for a class for about a year in 2003 at university but have been able to avoid it other than that.

                                                                                                                                                                                        2. 1

                                                                                                                                                                                          Yeah. I hadn’t used Windows since Win 3.1, until I started working for the state (in the Win XP era). I still don’t use it at home, but all my dayjob work is on Windows, and C#.

                                                                                                                                                                                        3. 5

                                                                                                                                                                                          they pay less

Not sure about this one. When you talk about pay, you also have to count all the advantages that come with it. In addition, they usually push you out the door at 5 pm, so your effective hourly rate is very close to the contractual one.

                                                                                                                                                                                          1. 3

Most people complaining that they pay less are the tech workers who hustle hard in Silicon Valley or at one of the big N companies. While government jobs can pay really well and offer excellent value, especially when you consider pay per hour and benefits like pensions, a Google employee’s ceiling is going to be way higher.

There’s a subreddit where software engineers share their salaries, and it seems like big N companies can pay anything from $300k to $700k USD when you consider the total package. No government job is going to match that.

                                                                                                                                                                                          2. 3

                                                                                                                                                                                            Do you work in the public sector? What’s it like?

                                                                                                                                                                                            1. 13

                                                                                                                                                                                              I do.

Pros: hours and benefits. Less trend-driven development and less of the Red Queen effect. Less age discrimination (and probably more diversity in general, at least compared to Silicon Valley).

                                                                                                                                                                                              Cons: low pay, hard to hire and retain qualified people. Bureaucracy can be galling, but I imagine that’s true in large private sector organizations, too.

We’re not that far behind the times here; we’ve avoided some dead ends by being just far enough behind the curve to see stuff fail before we adopt it.

                                                                                                                                                                                              Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.

                                                                                                                                                                                              1. 6

                                                                                                                                                                                                I will say, I once did a contract with the Virginia DOT during Peak Teaparty. Never before in my life have I seen a more downtrodden group. Every single person I talked to was there because they really believed in their work, and every single one of them was burdened by the reality that their organization didn’t and was cutting funding, cutting staff, and cutting… everything.

                                                                                                                                                                                                They were some of the best individuals I ever worked with, but within the worst organization I’ve ever interacted with.

Contrast that with New York State: I did a shitton of work for a few departments there. These were just folks who showed up to get things done. They were paid well, respected, and accomplished what they could within the confines of their organization. They were also up for letting people knock off work at 2 PM.

                                                                                                                                                                                                1. 2

                                                                                                                                                                                                  Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.

                                                                                                                                                                                                  Agreed. There’s no such thing as an ethical corporation.

                                                                                                                                                                                                  Do you mind sharing the minimum qualifications of a candidate at your institution? How necessary is a degree?

                                                                                                                                                                                                  I’m asking for a friend 😏

                                                                                                                                                                                                  1. 2

                                                                                                                                                                                                    What about B corps?

                                                                                                                                                                                                    1. 1

                                                                                                                                                                                                      No, not even them.

When you think about what “profit” is (i.e., taking more than you give), I think it’s really hard to defend any for-profit organization. Somebody has to lose in the exchange. If it’s not the customers, it’s the employees.

                                                                                                                                                                                                      1. 5

                                                                                                                                                                                                        That’s a pretty cynical view of how trade works & not one I generally share. Except under situations of effective duress where one side has lopsided bargaining leverage over the other (e.g. monopolies, workers exploited because they have no better options), customers, employees and shareholders can all benefit. Sometimes this has negative externalities but not always.

                                                                                                                                                                                                        1. 1

                                                                                                                                                                                                          Then I guess we must agree to disagree 🤷🏻‍♂️

                                                                                                                                                                                                        2. 2

                                                                                                                                                                                                          Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.

                                                                                                                                                                                                          Reciprocity is a natural law: markets function because both parties benefit from the exchange. As a nod to adsouza’s point: fully-informed, warrantied, productive, voluntary exchange makes markets.

                                                                                                                                                                                                          Profit exists because you can organize against risk. Due to comparative advantage, you don’t even have to be better at it than your competitors. Voluntary exchange benefits both weaker and stronger parties.
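
To put toy numbers on the comparative advantage point (my own made-up illustration, not anything from this thread):

    # Output per hour of work for two made-up parties.
    alice = {"bread": 6, "cloth": 3}   # better at both (the "stronger" party)
    bob   = {"bread": 1, "cloth": 2}   # worse at both  (the "weaker" party)

    # Opportunity cost of one unit of cloth, measured in bread given up.
    alice_cost = alice["bread"] / alice["cloth"]  # 2.0 bread per cloth
    bob_cost   = bob["bread"] / bob["cloth"]      # 0.5 bread per cloth
    print(alice_cost, bob_cost)

    # Bob gives up less bread to make cloth, so he has the comparative
    # advantage in cloth even though Alice is absolutely better at both.
    # Any voluntary trade at a price between 0.5 and 2 bread per cloth
    # leaves both of them better off than producing everything alone.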

                                                                                                                                                                                                          1. 1

                                                                                                                                                                                                            Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.

                                                                                                                                                                                                            I mean, yes, I was repeating myself. I wasn’t concluding anything: I was merely rephrasing “profit.” I’m not sure what you’re trying to get at here aside from fishing for a logical fallacy.

                                                                                                                                                                                                            a tautology. i.e., meaningless repetition.

                                                                                                                                                                                                            Intentionally meta?

                                                                                                                                                                                                            Reciprocity is a natural law

Yup. No arguments here. However, reciprocity is not profit. In fact, that’s the very distinction I’m trying to make. Reciprocity is based on fairness and balance, the idea that what you get should be equal to what you give. Profit is expecting to get back more than you put in.

                                                                                                                                                                                                            Profit exists because you can organize against risk.

                                                                                                                                                                                                            Sure, but not all parties can profit simultaneously. There are winners and losers in the world of capitalism.

                                                                                                                                                                                                          2. 1

                                                                                                                                                                                                            So, if I watch you from afar and realize that you’ll be in trouble within seconds, come to your aid, and save your life (without much effort on my side) in exchange for $10, who’s the one losing in this interaction? Personally, I don’t think there’s anything morally wrong with playing positive-sum games and sharing the profits with the other parties.

                                                                                                                                                                                                        3. 1

For an entry-level developer position, we want either a bachelor’s degree in an appropriate program with no experience required, an associate’s degree plus two years of experience, or no degree and four years of experience. The help-desk and technician positions probably require less at entry level, but I’m not personally acquainted with their hiring process.

                                                                                                                                                                                                          1. 2

                                                                                                                                                                                                            I would fall into the last category. Kind of rough being in the industry for 5 years and having to take an entry level job because I don’t have a piece of paper, but that’s how it goes.

                                                                                                                                                                                                            1. 2

                                                                                                                                                                                                              For us, adding an AS (community college) to that 5 years of experience would probably get you into a level 2 position if your existing work is good. Don’t know how well that generalizes.

                                                                                                                                                                                                              1. 2

Okay, cool! I have about an AS’s worth of credits from a community college; I’d just need to graduate officially. Though at that point, I might as well get a BS.

                                                                                                                                                                                                                Thanks for helping me in my research :)

                                                                                                                                                                                                      2. 4

                                                                                                                                                                                                        I don’t, but I’m very envious of my family members who do.

One time my cousin (he works for the state’s Department of Forestry) replied to an email on a Sunday, and they told him to take 4 hours off on Monday to balance it out.

That said, from a technological perspective I’d imagine it’s quite behind the times and moves very slowly. If you’re a diehard agile manifesto person (I’m not), I probably wouldn’t recommend it.

                                                                                                                                                                                                        EDIT: I guess it’s really what you value more. In the public sector, you get free time at the expense of money. In the private sector, vice versa. I can see someone who chases the latest technologies and loves to code all day long being miserable there, but for people who just code so they can live a fulfilling life outside of work it could be a good fit.