1. 1

    HardenedBSD and Jails. Works great! Documentation is a little sparse, but everything just works. Probably better to start with FreeBSD. Configuration management is shell scripts; I’ve found everything else to be very brittle, with horrible debugging. Debugging shell is a stable, well-known problem.

    1. 2

      This is worrying. Do I have to start looking at fastmail alternatives? Any recommendations?

      1. 2

        How do you think this will affect Fastmail, specifically? They do not provide encrypted email, and they already comply with Australian government search orders.

        1. 2

          I’m not an expert, but I don’t see how it would change anything for Fastmail; they already give the govt. anything they ask for (that’s legal).

        2. 1

          I stumbled upon proton mail the other day. Not tried it though: https://en.wikipedia.org/wiki/ProtonMail

          If you like pain, you can host your own mail server.

          1. 1

            I stumbled upon proton mail the other day. Not tried it though: https://en.wikipedia.org/wiki/ProtonMail

            caveat emptor regarding protonmail.

            1. 1

              Who or what is Duke-Cohan?

              I’m a happy customer of ProtonMail, but I don’t like the pain of self-hosting unless it’s solely my own shit, and Google is a no-go zone.

              1. 1

                Oopsy!

            1. 2

              gosh, to do this on a BSD is essentially: clone the repo, change whatever you want and then type make world.

              1. 5

                We have our own custom payroll system, and I agree payroll is crazy. I find it’s a mixture of 2 things that make it hard:

                • People are very fuzzy and super hard to concretely define.
                • People care A LOT about money, which increases their fuzzy nature, and to competing ends: the company wants to minimize payroll as much as possible, while employees want to maximize (their) payroll, so we have a bunch of fuzzy people fighting over literal pennies in a never-ending game.

                Not to mention all the laws from many different jurisdictions; payroll is ripe for insane complications.

                Whenever people say payroll should be “easy” I just laugh and laugh.

                1. 4

                  Was thinking about this. Many of those various complicated benefits that specific subgroups of people are entitled to are just something someone negotiated once. They’re worth a certain amount of money to that person and there’s some amount of money which you could pay them in return for them not having that benefit any more.

                  Has anyone seriously tried to do an ROI analysis of offering people more money to switch to less-complicated remuneration schemes… against the amount of money saved by making the payroll code some % simpler?

                  edit: obviously this doesn’t apply to most of the really complicated things like taxation, but it could apply to some of the things they talk about, like the “these five specific people negotiated a remuneration scheme under which slightly different rules apply to them and no one else” scenarios mentioned there.

                  1. 2

                    I think it’s interesting, but the people that fight over the $$’s are basically never the people that are involved in having to implement said payroll system to pay those $$‘s. At least in my experience. So I’d be quite surprised if it was ever done at any large scale. That, and by now most payroll systems are broad enough that most insanities perpetrated by companies can be implemented without new code… usually.

                    1. 1

                      That’s an interesting analysis in general, and one I don’t think I have thought of before. Not having feature X will cost us money, because reasons (X will be manual, won’t be done as fast, we’ll have to do Y which is costlier, etc), but adding X will increase code complexity a lot, resulting in added maintenance cost in the future. So, which is cheaper in the long run?

                      1. 3

                        I’m really hoping that someone else might have done that analysis and written about it somewhere because I would not even know where to begin with that myself. :)

                      2. 1

                        Has anyone seriously tried to do an ROI analysis of offering people more money to switch to less-complicated remuneration schemes… against the amount of money saved by making the payroll code some % simpler?

                        The incentives aren’t aligned.

                        Has anyone ever used time-reporting software that isn’t a huge pain to use? It’s because the people making it don’t sell it to the peons reporting their time; they sell it to the managers looking at the results. Those people probably get a great UI!

                        In a similar way, try getting a union to accept simpler remuneration just to save the company some money on payroll reporting and expensive consultants - now you’ve pissed off both the union and the consultants!

                      3. 3

                        The software I support at work generates schedules for people, so naturally customers want to use that data as a basis for payroll.

                        It’s a nightmare, especially since we were “nice” to customers in the beginning and hacked together custom solutions for their Payroll/HR systems without considering reusability etc. I once brought down an installation because I had to manually edit XML config files to add new holiday dates and forgot to save them in UTF-16…

                        Somewhere someone has dreams of the universal PayrollXML interchange format, but this is where the messy part of human relations (in the large) meets the part where programmers just want to have one interface - think how streamlined it will be!

                        This is why smart contracts will probably never work, either.

                      1. 1

                        don’t they have signatures? like you don’t even need gpg! https://www.openbsd.org/faq/faq4.html#Download

                        1. 4

                          They do have signatures, but they use their own tool called signify: https://man.openbsd.org/signify. It’s right there on the page you linked :)

                        1. 2

                          Can someone ELI5 why Firefox is not to be trusted anymore?

                          1. 4

                            They’ve done some questionable things. They did this weird tie-in with Mr. Robot or some TV show, where they auto-installed a plugin (but disabled, thankfully) for basically everyone as part of an update. It wasn’t enabled by default if I remember right, but it got installed everywhere.

                            Their income stream, according to Wikipedia, is donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit and had to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

                            They bought Pocket and said it would be open sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

                            1. 4

                              Some of this isn’t true.

                              1. Mr. Robot was like a promotion, but not a paid thing like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
                              2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy. No corporations we’ve got to talk to. Search providers come to us to get our users.
                              3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
                              1. 3
                                1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

                                2. When like 80 or 90% of their funding is directly from Google… it at the very least raises questions. So I wouldn’t say it’s not true; perhaps I over-simplified, and fair enough.

                                3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

                                But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

                              2. 4

                                But really their entire revenue stream comes directly from Google.

                                To put this part another way: the majority of their income comes from auctioning off being the default search bar target. That happens to be worth somewhere in the 100s of $millions to Google, but Microsoft also bid (as did other search engines in other parts of the world. IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism where Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

                                1. 1

                                  Agreed. Except it’s well over half of their income, I think it’s up in the 80% or 90% range of how much of their funding comes from Google.

                                  1. 2

                                    And if they diversify and, say, sell tiles on the new tab screen? Or integrate read-it-later services? That also doesn’t fly, as recent history has shown.

                                    People ask Mozilla not to sell ads, not to take money for search engine integration, not to partner with media properties, and still to keep up their investment into development of the platform.

                                    But people don’t offer any explanation of how Mozilla can do that while also rejecting all of its means of making money.

                                    1. 2

                                      Agreed. I assume this wasn’t an attack on me personally, just a comment on the sad state of FF’s diversification woes. They definitely need diversification. I don’t have any awesome suggestions here, except that I think they need to diversify. Having all your income controlled by one source is almost always a terrible idea long-term.

                                      I don’t have problems, personally, with their selling of search integration, I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time, they can get Google(or anyone else) down around the 40-50% range.

                                    2. 1

                                      That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

                                2. 2

                                  You also have the fact that they are based in the USA, which means following American laws. Regarding personal data, those laws are not very protective, and even less so if you are not an American citizen.

                                  Moreover, they are testing, in Nightly, using Cloudflare DNS as the DNS resolver even if the operating system configures another one. A DNS resolver sees every domain name resolution you do, which means it knows which websites you visit. You should be able to disable it in about:config, but putting it there rather than in the Firefox preferences menu is a clear indication that it’s not meant to be easily done.

                                  You can also add the fact that it is not easy to self-host the data stored by your browser. Can I be sure it isn’t sold, when their primary financial support is Google, which bases its revenue on data?

                                  1. 3

                                    Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

                                    1. 1

                                      They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any link about how they manage it? In which country is it stored? What law applies to it?

                                      1. 4

                                        Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

                                        Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.
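
                                        To make “derived locally” concrete, here is a minimal sketch of client-side key derivation using PBKDF2 via OpenSSL. This is only an illustration of the idea, not Mozilla’s actual scheme (Firefox Sync uses its own key-stretching protocol), and the salt and iteration count here are made-up values:

                                            #include <openssl/evp.h>
                                            #include <stdio.h>
                                            #include <string.h>

                                            int main(void)
                                            {
                                                /* Hypothetical inputs; the real protocol uses different parameters. */
                                                const char *password = "correct horse battery staple";
                                                const unsigned char salt[] = "illustrative-salt";
                                                unsigned char key[32]; /* 256-bit key, derived and kept locally */

                                                if (!PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                                                                       salt, (int)(sizeof(salt) - 1),
                                                                       100000, EVP_sha256(),
                                                                       (int)sizeof(key), key))
                                                    return 1;

                                                /* The server only ever sees ciphertext produced with this key;
                                                 * the password and the derived key never leave the client. */
                                                for (size_t i = 0; i < sizeof(key); i++)
                                                    printf("%02x", key[i]);
                                                printf("\n");
                                                return 0;
                                            }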

                                        The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

                                        The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.

                                        This of course assuming that your password is not ‘hunter2’.

                                        It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.

                                        1. 1

                                          “The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                                          That’s not the worst by far. The Core Secrets leak indicated they were compelling suppliers, via the FBI, to put in backdoors. So they’d either pay/force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser sploit on the target.

                                          1. 1

                                            In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                                            1. 1

                                              That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

                                          2. 1

                                            As you said, it’s totally fair to not trust Mozilla with data. As part of that, it should always be possible/supported to “self-host”, as a means to keep that as an option. Enough said to that point.

                                            As to “understanding the architecture”, it also comes with appreciating the business practices, ethics, and willingness to work within the privacy laws of a given jurisdiction. This isn’t being conveyed well by any of the major players, so with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust would be present here.

                                          3. 2

                                            How do they encrypt it?

                                            On the client, of course. (Even Chrome does this the same way.) Firefox is open source, you can find out yourself how exactly everything is done. I found this keys module, if you really care, you can find where the encrypt operation is invoked and what data is there, etc.

                                            1. 2

                                              You don’t have to give it to them. Firefox sync is totally optional, I for one don’t use it.

                                              Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

                                          4. 2

                                            The move to Cloudflare as the DNS-over-HTTPS resolver is annoying enough to make me consider other browsers.

                                            You can also add the fact that it is not easy to self-host the data stored by your browser. Can I be sure it isn’t sold, when their primary financial support is Google, which bases its revenue on data?

                                            Please, no FUD. :)

                                            1. 3

                                              move to Cloudflare

                                              It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable/disable DoH in about:config.

                                        1. 1

                                          Accessibility (a11y) is HARD. One person’s a11y is not another person’s, and they can conflict sometimes. I don’t know of a conflict with your particular example of having a send button instead of only the enter key sending, but I’ve come across cases that have.

                                          For example, some deaf people have ASL as their native language, and their English skills are not very good, so even using a computer can be difficult, since there is (currently) no way to have anything approaching a native-language interface for native ASL signers. There is still no paradigm of computing that is 100% visual and could be done with ASL. Yes, I’m aware of the ML work around having computers “learn” ASL, but let’s face it, computers can barely handle very simple English (see Siri, Alexa, etc.), and there are way more resources dedicated to that.

                                          Also, I use ASL here merely as an example; most countries/geographic areas have their own local signed languages, independent of their native spoken/written language.

                                          a11y is not a matter of checking all these boxes and being magically perfect. There are general things that help, just like in PwDs’ physical world(s); mobility enhancements, as an example, tend to be fairly unique to an individual person.

                                          1. 4

                                            Every once in a while someone tries these scum tactics on their customers. Looking at Apple’s track record, this is just a natural progression for them.

                                            And I think the Apple users deserve what they get, if they keep buying Apple stuff despite all the effort Apple takes to abuse them.

                                            1. 5

                                              It’s a widespread problem across entire industries. In my experience most large manufacturers are pretty much equally hateful of end-users. Look at the whole tractor debacle, where John Deere and others are making tractors un-repairable by anyone other than John Deere, so they can get you coming and going.

                                            1. 4

                                              I agree with the conclusion at the end: the best part about docker and kubernetes is that the configuration is all written out. You don’t need to remember anything. Everything can be version controlled. When you come back to it in 6 months you don’t have to struggle to remember how you installed x. Otherwise, it’s a pain in the ass.

                                              1. 3

                                                An alternative: just write a nixos config for your service if you’re concerned about your configuration producing an artifact for later.

                                                1. 2

                                                  I like just using ansible for deploying everything, myself.

                                                  And then there’s this mysterious thing called “documentation” that I hear helps too, when returning to an old project after six months or so.

                                                  1. 3

                                                    That sounds like writing. I think you’re trying to trick me into writing.

                                                  2. 1

                                                    Terrible question (because the answer varies), but how long does it take to get up and running using nixos when you don’t know nixos?

                                                    1. 4

                                                      It took me a few days of struggling, but that is because I

                                                      • was too stupid to understand that you can’t compile Haskell on MacOS and have it run on Linux
                                                      • didn’t understand how/why I should pin the nixpkgs version
                                                      • am generally quite stupid

                                                      Overall, I’m super happy with NixOps which in my case provisions EC2 machines with NixOS and deploys my Haskell programs to them, after compiling said programs inside a NixOS Docker container (because the architectures must match).

                                                      I wrote about this a few years ago so whatever I wrote is likely to be somewhat outdated. HTH.

                                                      1. 3

                                                        My problem is, it’s a giant PITA to get anything to talk to stuff inside the Nix ecosystem. You end up fighting dependency hell all over again if you try to talk to stuff that’s packaged up inside /nix. So if you go Nix you have to go 100% Nix; there is no “oh, I’ll use Nix for these 10 base dependencies” sort of thing. It’s an all-or-nothing proposition in my experience.

                                                        Also, I’ve never been able to get my own stuff to live inside of Nix; I’ve found basically zero useful documentation about how to keep my private code within Nix (but use nixpkgs for dependencies) without it living inside of the nixpkgs ecosystem. Some stuff is either not ready or will never get open-sourced, for whatever reason.

                                                        The ideas behind nix are pretty great, but getting anything to interact with nix has been nothing but hell.

                                                  1. 5

                                                    I learned how computers work from learning assembly (x86).

                                                    1. 2

                                                      Same for me. After learning C, it was eye-opening to me to learn in Assembly about the stack, and especially how the magic of “call a function” is just a primitive (yet brilliant) “push current instruction pointer on stack, then jump to the called address”. Also how local variables are just bytes on the stack too, interleaved with the CALL return addresses. Also how varargs in C are just brutal macros for raw reading from this stack. Those were all amazing discoveries to me. Though apparently, modern Assembly is arguably also a virtual machine — expressed in microcode, and with various leaky abstractions, such as cache layers and speculative execution…
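
                                                        To make the varargs point concrete, here’s a minimal C sketch (just an illustration; on modern ABIs some arguments are passed in registers, so “raw reads from the stack” is an approximation of what the va_* macros do):

                                                            #include <stdarg.h>
                                                            #include <stdio.h>

                                                            /* Sum `count` ints passed as variadic arguments. va_arg walks the
                                                             * calling convention's argument area - historically, raw reads off
                                                             * the stack just above the CALL return address. */
                                                            static int sum(int count, ...)
                                                            {
                                                                va_list ap;
                                                                int total = 0;

                                                                va_start(ap, count);
                                                                for (int i = 0; i < count; i++)
                                                                    total += va_arg(ap, int);
                                                                va_end(ap);

                                                                return total;
                                                            }

                                                            int main(void)
                                                            {
                                                                printf("%d\n", sum(3, 10, 20, 12)); /* prints 42 */
                                                                return 0;
                                                            }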

                                                    1. 11

                                                      This is what we do, per policy.

                                                      It depends on what you mean by Scale..

                                                        The easiest way to scale, provided you have $$’s, is to just throw more hardware at the problem (RAM, CPU, etc.). You can scale x86 hardware up quite far, and can probably handle most workloads while still sitting on a single machine just fine.

                                                        If you can’t scale on hardware, for whatever reason, then you have to get more creative/work a little harder. The next easiest is to figure out what the pain points are for scaling app X, and work on the code/configuration to lower the resources required to do the same amount of work. For custom apps, this could be something simple like changing how you store X in memory, or it could be porting that work out to a more performant language (like, say, a Cython module if the code was originally in Python), etc. If it’s a Java app, it might just be re-configuring the JVM a little bit.

                                                      Next is scaling by increasing the number of running copies of said application. This is where it can get.. hard, and is definitely more unique to each application. For instance scaling nginx across multiple copies is really easy, since it’s pretty much stateless, and has no DB or anything you have to deal with. Scaling postgres across multiple copies gets a lot harder, since it’s a DB, and is very stateful.

                                                      For us, for most web stuff, that is mostly stateless, we scale by just starting more instances of the application. For stuff like Postgres, we scale hardware, versus doing something like citus or offloading reads to a hot-standby or whatever. It gets complicated quickly doing that stuff.

                                                        Generally the easy answer is to just throw more physical resources at the problem; that almost always works to make things faster. You sort of have to know what resource to increase though (CPU speed, memory, I/O bandwidth, storage, etc.). Luckily every OS out there gives you the tools you need to figure that out pretty easily.

                                                      1. 2

                                                        Excellent comment. I agree with all of it. I’ll add one can scale the databases by starting with or migrating to a database that scales horizontally. There’s quite a few of them. This is true for some other stateful services. One might also use load-balancers and/or sharding to keep too much traffic from being on one machine. That gets a bit more complex. There’s still tools and specialists that can help.

                                                        1. 2

                                                          I agree there are stateful databases that scale multi-node better out of the box than Postgres(PG) does. I specifically picked PG as the example here because it doesn’t scale multi-instance/machine out of the box very well.

                                                          Once you get to multi-machine stateful applications, there are a lot of different trade-offs you have to handle, I’m not sure there is any one stateful DB that is multi-machine out of the box that will work for basically every workload the way Postgres does. I.e. PG is basically useful for any DB workload out of the box, provided it can fit on a single physical machine. I’d love examples of general purpose DB’s like PG that are multi-node out of the box with basically no downsides.

                                                          But basically my advice is, once you have a stateful thing you have to go multi-node with, you either need a good, well paid consultant, or good in-house technical staff, as it’s not an easy problem that is very well solved for all use cases. Or to put it another way, avoid multi-node stateful things until absolutely forced to do so, and then go there with eyes wide open, with lots of technical knowledge at hand. Luckily if you do get to that requirement, you probably have lots of $$$ resources to shove at the problem, which helps immensely.

                                                          1. 1

                                                            Well-said again. Yeah, I don’t know if any of those advanced DB’s cover every kind of workload with what drawbacks. I’d want to see experimental comparisons on realistic data.

                                                        2. 1

                                                            You sound like you know a lot about this topic. Hypothetically, if it’s even possible, what would you do if the load balancer that you put in front of your workers can’t handle the incoming load? How do you load balance the load balancer?

                                                          1. 4

                                                            It’s definitely possible. You have some options, depending on the kind and amounts of traffic we are talking about.

                                                              It depends some on whether the load balancer (LB) is hardware- or software-based, etc. Most people are doing software-based ones these days.

                                                            Roughly in order of preference, but it’s a debatable order:

                                                            • Ensure you are using a high throughput LB (haproxy comes to mind as a good software based one).
                                                            • Simplify the load balancer configs to the bare minimum, i.e. get them doing the least amount of work possible. The less work you have to do, the more you can do given X resources.
                                                              • Find the bottleneck(s); for most LB workloads the problem is a network I/O problem, not a CPU, memory, or disk problem. So ensure your hardware is built for peak I/O (make sure your NICs are configured for IP offloading, your kernel is tuned for I/O, etc.).
                                                              • Scale out the LB with multiple instances. This gets… interesting, as suddenly you need your traffic to hit more than one machine. You can do that a variety of different ways, depending on the actual traffic we are talking about. The easiest is probably just lazy DNS RR (i.e. have your name return multiple A/AAAA records for the host you are load balancing, where each IP is an LB); see the sketch after this list.
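
                                                              As a rough illustration of why round-robin DNS spreads load: the resolver hands back every A/AAAA record for the name, and clients simply pick from (or walk) that list, so different clients land on different LBs. This is a generic sketch; “www.example.com” is just a placeholder for a name with one record per LB, and the record ordering you actually see depends on the resolver:

                                                                  #include <arpa/inet.h>
                                                                  #include <netdb.h>
                                                                  #include <netinet/in.h>
                                                                  #include <stdio.h>
                                                                  #include <string.h>
                                                                  #include <sys/socket.h>

                                                                  int main(void)
                                                                  {
                                                                      struct addrinfo hints, *res, *p;

                                                                      memset(&hints, 0, sizeof(hints));
                                                                      hints.ai_socktype = SOCK_STREAM;

                                                                      /* Placeholder name; imagine one A/AAAA record per load balancer. */
                                                                      if (getaddrinfo("www.example.com", "443", &hints, &res) != 0)
                                                                          return 1;

                                                                      for (p = res; p != NULL; p = p->ai_next) {
                                                                          char buf[INET6_ADDRSTRLEN];
                                                                          const void *addr = (p->ai_family == AF_INET)
                                                                              ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                                                                              : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
                                                                          if (inet_ntop(p->ai_family, addr, buf, sizeof(buf)) != NULL)
                                                                              printf("candidate LB: %s\n", buf);
                                                                      }

                                                                      freeaddrinfo(res);
                                                                      return 0;
                                                                  }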

                                                            Rinse and repeat the above, until you have room to breathe again.

                                                            There are more complicated ways to do this, depending on traffic load. Once you get to needing to scale past simple DNS RR, you probably want a network expert, as it depends on the kinds of traffic (is it all inbound, or is it mostly outbound traffic, etc). I.e. there are some neat things you can do where the LB’s are only handling the inbound traffic and all outbound/return traffic can come directly from the machine(s) doing the work, and not have to go back through the LB, etc. This requires special configuration(s) and is not overly simple to do.

                                                              But it all basically comes down to the general case above: load balancers are generally very stateless, so the general “just run a bunch of copies” plan usually works fine.

                                                            If you need your LB’s to do a lot of CPU based work, then you can have a layered fanout model, where you have several layers of LB, each doing a small piece of the total overall work, if you can fit it all within your RTT budget.

                                                              Also, if you get extreme, you can do user-space or unikernel designs where the entire software stack is taken over to do only LB duties, and you can really tune the software stack for your specific LB duties.

                                                            More than this, and into specific workloads for you, I’d be happy to consult, but it won’t be free ;)

                                                            1. 2

                                                                This is how you do it [1], no cloud required. From what I understand, the cloud is essentially composed of what’s in the linked article along with a sophisticated configuration. People’s apps run on this kind of config in the cloud if you go deep enough, but it’s handled by people who worry about the plumbing for you. I really must say that the article I’ve linked is quite excellent and comes with source code. You should check out his other projects as well.

                                                              [1]https://vincent.bernat.ch/en/blog/2018-multi-tier-loadbalancer

                                                              1. 1

                                                                  The related question to this is, of course: what do you do if the load balancer dies?

                                                            1. 1

                                                                It’s very satisfying to see how they progress on a completely new platform for this OS, still keeping the legacy chain running, as OpenVMS is probably one of the longest-supported consistent platforms, with updates still being released today.

                                                                But it’s still intriguing - what’s their target? Some hobbyists do port older OSes’ environments or reimplement them, to maintain compatibility with legacy software or just for some hard-to-explain „feeling” from such an OS, but this mostly happens in the desktop/home environment without significant funding (HaikuOS, MorphOS, ReactOS). In this case, we have a pure industry-grade OS being funded by a large corporation just to be ported onto a weak home-computer CPU which is already going to be replaced by ARM in the next years.

                                                              1. 3

                                                                  We ran OpenVMS for our accounting system for > 20 years (I wasn’t here for all of that - I’d guess we started in the early 80’s; too lazy to look it up). We only switched when HP finally gave up and said we are finished. At that time, we decided to move to POSIX-like platforms, as the writing on the wall was pretty clear: OpenVMS was dead/dying, and this company hadn’t quite proved they were 100% serious about porting to x86 yet.

                                                                  In the time it’s taken these guys to get to first boot, we’ve entirely re-written our application with a Qt GUI, natively supporting macOS, Windows and Unix-like OSes, added a lot more functionality, etc. So in retrospect I don’t think our decision was wrong at all, but OpenVMS was definitely a lot more stable than our current Linux/BSD boxes. Our outages under VMS were measured in minutes per decade, and now it’s minutes per year. But it’s just accounting; we don’t need 100% uptime, so current stability is OK for us. Also, now we have to buy new hardware every 5 years instead of once a decade like when we were running OpenVMS. I haven’t done a cost comparison yet; it would be very interesting to compare our actual cost(s) and see if it’s cheaper this way than when we were under OpenVMS… I’d venture to guess it’s about the same taken over a 10-year period.

                                                                That said, there are plenty that keep sucking the HP support contracts until the bitter end, and marrying into this new company as quickly as possible. Many old organizations still have plenty of OpenVMS around.

                                                                  Plus, with OpenVMS being ported to x86, it can eventually live under a VM, as x86 VMs will be here for many, many more decades, even if most of the industry eventually moves to some other chip (like, say, ARM as you suggest), so OpenVMS won’t die anytime soon - but it will be a niche product for sure.

                                                                1. 1

                                                                  There are a lot of little reasons why, but I much prefer using VMS over UNIX for almost any task given the opportunity.

                                                                  I should add that recent studies have shown, for some of the reasons you mention above, that OpenVMS systems still have lower long-term TCO than Unix and Windows systems and higher reliability.

                                                                2. 2

                                                                  According to Wikipedia and confirmed at the TOP500 site, “as of June 2018, TOP500 supercomputers are now all 64-bit, mostly based on x86-64 CPUs (Intel EMT64 and AMD AMD64 instruction set architecture), with few exceptions”.

                                                                  OpenVMS supports 32 processors and 8TB RAM per node, with 96 nodes per VMScluster, creating systems of 3,072 processors with 768TB of memory. The x86_64 transition will be the fourth supported hardware platform and the upgrade path from the current IA64-based offering.

                                                                  OpenVMS is widely used in many mission-critical applications.

                                                                  1. 2

                                                                    VMS clusters were crazy reliable. All kinds of businesses used them. They were happy to pay good money for the OS, too, long as it kept getting updated. It was one of HP’s highest-profit products at one point. Then they ditched it for some reason. This company took over. They’re updating it mainly for those legacy companies.

                                                                    Search engines’ recent tactics make it hard to find some old articles like the one where customers loved it to death for its reliability and supposed security. I couldn’t find it. You can see here what kinds of critical workloads it’s been running. Wait, I did just find this which describes some of the reliability. I also like the API claim since I said something similar about UNIX/Linux slowly becoming more like mainframes and OpenVMS to meet cloud requirements. Far as I can tell, VMS is still better for what a lot of their users want to do given support for clustering, metering, and distributed apps. It won’t be cheaper, though. Customers used to say there’s a “$” in the command prompt for a reason. ;)

                                                                    1. 3

                                                                      I really disliked VMS, but the VMS cluster/SMP solution was technically better than the competing ones. When Linux went SMP there was an effort to get a similar design, but the standard SMP OS design was what was obvious and what the corporate sponsors wanted, so…

                                                                      1. 1

                                                                        There were actually two distinct implementations of VMS multiprocessing — the original implementation was a unique asymmetric design, which changed to symmetric multiprocessing around the VMS 5 timeframe, if my memory is correct.

                                                                        1. 1

                                                                          I may be wrong, but as I recall it, the Galaxy OS design ran multiple instantiations of the same kernel on an SMP machine - there was a single runq per kernel and each OS managed its own address space - although there was a way to reallocate pages. This design had a huge advantage: it did not need the increasingly fine-grained, complex, and expensive locking systems that are used when a single OS instance has multiple parallel threads of execution.

                                                                          1. 1

                                                                            Yes, OpenVMS Galaxy was a system to partition and run multiple VMS instances on a single server, and these instances could be VMScluster members as well.

                                                                      2. 2

                                                                        It would be really interesting if some of the higher-ups in HP would someday make public why they ditched OpenVMS… It’s either a really interesting story, or some number cruncher somewhere just decided it wasn’t worth the porting effort off of the existing, dying hardware line…

                                                                        1. 1

                                                                          It’s always bugged me that I don’t know. The articles before ditching it said it was one of their most profitable products. Then they want to ditch it. My hypothesis was they had two, kind-of-competing products in the same area: OpenVMS Clusters and NonStop. If it’s about reliability for enterprise customers, NonStop is probably the better bet since its assurances go to hardware level. Companies with two products that compete often ditch one. So, they ditched OpenVMS in favor of NonStop.

                                                                          Again, I have no idea why they did it. I just thought that was plausible. The only counter I’ve gotten to that so far is some OpenVMS experts saying they don’t really compete. The customer testimonials and application areas make me think they really do, with OpenVMS’s niche being something that doesn’t go down even if it’s expensive. NonStop customers demand that, too. The two aren’t the same in capabilities, but lots of overlap exists.

                                                                          Wait, I’m leaving off one part of the same hypothesis, where Windows- and Linux-based solutions were taking over and more in demand for some of OpenVMS’s market. I know there were lots of conversions. The OpenVMS market was a mix of different types of demand. Windows and Linux were bleeding it out of the regular server market with those kinds of capabilities. They were cheaper, more flexible, and had stronger ecosystems. Then what’s left is high reliability and security, esp. reliability. That leaves OpenVMS vs NonStop and Stratus. HP owns two. They kill the weaker one in that niche. Maybe the less profitable one, too. So, there’s my full hypothesis.

                                                                          1. 2

                                                                            Looking at the timeline, it seems plausible that when they finally came to terms with the sinking of the Itanic, they could not face the effort of porting.

                                                                            1. 1

                                                                              Damn, I can’t believe I overlooked that. I was mentally separating the business and tech sides of this. Ports, esp. from RISC to CISC, are potentially a huge cost to technologists. Possibly they explained it to business people, who saw numbers that made them cancel the upgrade.

                                                                              Doing a quick compare, HP killed VMS on Itanium around June 2013. NonStop was also on Itanium. It was November 2013 when media reported they’d shift to x86. That doesn’t give me anything definitive. It is interesting that they started a port the year OpenVMS was cancelled on the same architecture, though. Fits your theory a bit, but the NonStop port had to cost a lot, too. That it involved hardware development weakens your theory a bit.

                                                                              All I can see right now.

                                                                          2. 1

                                                                            Didn’t Compaq make the decision? [ nope! ] The decision to abandon Alpha was also dumb.

                                                                            But DEC’s decision to dump the relational database they were developing in Colorado Springs, just as it was ready to ship, and to sell it and the highly skilled team to Oracle, was stupendously dumb.

                                                                            1. 1

                                                                              I thought a bit about this, and other than the speculation mentioned by nickpsecurity and others, it all boils down to the tradition HP has of killing off almost all of the acquired products, often in an underhanded way.

                                                                              My memory of events is that HP did not want DEC (and other acquisition) products competing with their own. Consider how HP acquired and quickly killed Apollo. This seems to be the blueprint - acquire and extinguish - resulting in killing Tru64, and letting VMS languish for a while as well.

                                                                              For Tru64 users the upgrade path was initially HP-UX on PA-RISC - even as it was rumored that, internally, PA-RISC was already dead and IA64 was the future.

                                                                              In killing Tru64 on the Alpha, and without even porting many of the advanced features (DECnet, TruCluster, AdvFS, etc.) they ended up selling some former Tru64/Alpha customers new hardware twice in a very short period of time.

                                                                              Now, consider that VMS was eating into HP-UX sales on the low end and NX/NonStop sales on the high-end …

                                                                              To me, HP purchasing Compaq was a much sadder day than Compaq purchasing DEC.

                                                                              To put this in perspective for the UNIX people, the HP acquisition was, to many DEC fans and customers, an exponentially sadder day, by many orders of magnitude, than the acquisition of Sun by Oracle.

                                                                              The VMSI situation - especially with their current team - is very exciting and the first extremely welcome bit of excellent news in a long while.

                                                                              (I want to add that this might not have been an evil HP plan all along, but it’s a pattern that repeated itself over and over with HP, and it was the way that many users felt and perceived the corporate actions. HP did, at some stage, have many dedicated and talented people who loved these systems that were rather mercilessly dispatched, and they were likely as upset as the customers, if not more so.)

                                                                              1. 1

                                                                                I knew someone high up in DEC engineering management for Alpha who had endless battles with DEC’s product management people - they really wanted to sell a few Alpha servers at huge markup instead of trying to grow the product’s sales. He told me that after the acquisition, DEC were on a call explaining sales targets for Alpha servers to Compaq, and the Compaq execs asked “is that in units of 1,000 or 10,000?” and the DEC guys said “no, just that number”.

                                                                        1. 2

                                                                          I’m not sure how I feel about this. I mean, it’s great the prices are so low, but there will definitely be costs incurred by Cloudflare to continue this service, plus user support, etc. Those costs are real. With no obvious way to cover these costs, something is going on. Are they selling your data/usage to someone? Are they treating it like a loss-leader and using it to further create a monopoly? Something is going on here, not sure what it is, but the old adage “if something is too good to be true…” seems to come to mind.

                                                                          1. 2

                                                                            I think the key is in the paragraph containing the phrase “we charge a significant premium”. That premium is so significant that they don’t even mention numbers.

                                                                            The rest is just marketing.

                                                                            A lot of companies spend 30% of their income on marketing, a research institute I know spends almost as much on grant applications. Cloudflare spends some percentage of its income on free tiers and services, expecting that the people who benefit from those are also people who write purchase proposals a little later.

                                                                            1. 1

                                                                              My impression of cloudflare is that they’re very cash rich or something and basically trying to become a one stop shop for everything you need to get a site hosted on the web except for the actual hosting. It reminds me of Google about 15 years ago when they had more extremely talented engineers than they knew what to do with and were cranking out projects left and right.

                                                                              1. 1

                                                                                They definitely have some good engineers working there, and they are doing some great technical things. That doesn’t make their business plan any easier to swallow. Many people jumped into bed with Google and are now trying to get out of bed and finding the jump painful, because Google continues to go to great lengths to keep you (and your privacy) with them. Google desperately wants your data in their, and only their, hoover vacuum, because it’s proven so lucrative. They started off with great intentions 15 years ago and got addicted to a googol of money, to the detriment of users and arguably the Internet as a whole (think AMP, etc.), sadly. (OK, technically they don’t have an actual googol of money, but the reference was just too good to pass up, given that was their name origin - as I remember it, back when they still used Legos for drive bays.)

                                                                                I don’t know Cloudflare’s business plan, either short term or long term, and I have no problem with them making money; I just don’t want them to become another Google that gets sidetracked into harming users for the benefit of another buck.

                                                                            1. 7

                                                                               I would have rather seen the HardenedBSD code just get merged back into FreeBSD. I’m sure there are loads of reasons, but I’ve never managed to find them; their website doesn’t make that clear. I imagine it’s mostly for non-technical reasons.

                                                                               That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future, as it sits in a niche that only OpenBSD really occupies, and it’s great to see some competition/diversity in this space!

                                                                              1. 13

                                                                                Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity to FreeBSD. All features were to be upstreamed. However, it took us two years in our attempt to upstream ASLR. That attempt failed and resulted in a lot of burnout with the upstreaming process.

                                                                                HardenedBSD still does attempt the upstreaming of a few things here and there, but usually more simplistic things: We contributed a lot to the new bectl jail command. We’ve hardened a couple aspects of bhyve, even giving it the ability to work in a jailed environment.

                                                                                The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

                                                                                edit[0]: grammar

                                                                                1. 6

                                                                                  I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

                                                                                  1. 19

                                                                                    I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.

                                                                                     I think OpenBSD is doing interesting things with regards to security research, but OpenBSD has fundamental paradigms that may not be compatible with grsecurity’s. For example: by default, creating an RWX memory mapping with mmap(2) is not allowed on either HardenedBSD or OpenBSD. However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice-versa).

                                                                                    On HardenedBSD:

                                                                                    void *mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC, ...); /* The mapping is created, but RW, not RWX. */
                                                                                    mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */
                                                                                    
                                                                                    munmap(mapping, getpagesize());
                                                                                    
                                                                                    mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC, ...); /* <- Totally cool */
                                                                                    mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
                                                                                    

                                                                                    It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users would need to toggle a flag on a per-binary basis for applications that violate the above rule (web browsers like Firefox and Chromium being the most notable offenders). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But not many OpenBSD folks like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.

                                                                                    I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

                                                                                    Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

                                                                                    1. 3

                                                                                      So how will JITs work on HardenedBSD? Is the sequence:

                                                                                          void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0);
                                                                                          /* emit the generated code into buf */
                                                                                          mprotect(buf, len, PROT_READ | PROT_EXEC);
                                                                                      

                                                                                      allowed?

                                                                                      1. 5

                                                                                        By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

                                                                                        HardenedBSD provides a utility that lets users tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.
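
                                                                                        For what it’s worth, one way JIT authors cope with strict W^X is to avoid flipping a single mapping from writable to executable at all: map the same backing object twice, once writable for emitting code and once executable for running it. Below is a rough sketch of that general technique (my own illustration using FreeBSD’s SHM_ANON, error handling omitted; whether a given PaX NOEXEC policy permits this particular dance without the per-application toggle is a separate question):

                                                                                            #include <sys/mman.h>   /* mmap, munmap, shm_open, SHM_ANON (FreeBSD) */
                                                                                            #include <fcntl.h>      /* O_RDWR */
                                                                                            #include <unistd.h>     /* ftruncate, getpagesize, close */
                                                                                            
                                                                                            int main(void) {
                                                                                                size_t len = (size_t)getpagesize();
                                                                                            
                                                                                                /* One anonymous shared-memory object backs both views. */
                                                                                                int fd = shm_open(SHM_ANON, O_RDWR, 0600);
                                                                                                ftruncate(fd, (off_t)len);
                                                                                            
                                                                                                /* A writable view for emitting code and a separate executable view for running it; neither view ever changes protection. */
                                                                                                unsigned char *rw = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                                                                                                unsigned char *rx = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
                                                                                            
                                                                                                /* A real JIT would emit code through rw (accounting for the fact that it runs at rx's address) and then call into rx. */
                                                                                            
                                                                                                munmap(rw, len);
                                                                                                munmap(rx, len);
                                                                                                close(fd);
                                                                                                return 0;
                                                                                            }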

                                                                                    2. 9

                                                                                      Just to expound on the different-philosophies point: OpenBSD would never bring ZFS, Bluetooth, etc. into the base system, something HardenedBSD does.

                                                                                      OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said, OpenBSD still has a lot going for it. I run both, depending on need.

                                                                                      If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, leaving it out gives you a much smaller surface for bad people to attack.

                                                                                      1. 5

                                                                                        If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

                                                                                        I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

                                                                                        1. 3

                                                                                          Why fork? Just don’t load the kernel modules…

                                                                                          1. 4

                                                                                            There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see whether the kernel could be made simpler with ZFS fully removed.

                                                                                            1. 1

                                                                                              You may want to take a look at dragonflybsd then.

                                                                                        2. 4

                                                                                          Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

                                                                                  1. 5

                                                                                    Working some more on my “Security best practices” pages: https://www.zie.one/en/security/ And going to help the local dog park at their work party.

                                                                                    1. 2

                                                                                      I used migadu.com for a long time, but am switching to self-hosted kolab now, as I want the synced calendars/reminders/etc across devices.

                                                                                      I like migadu because they are 1) outside the US and 2) charge based on USAGE, not on the number of accounts/domains (of which I have a bunch), which is nice. Also, they are cheap. But I don’t like that they run a JS-based mail server, which is not so fabulous.

                                                                                      1. 2

                                                                                        Seems to fly in the face of the reproducible-builds movement :) I guess one could do a reproducible build first, verify it, and then recompile with this for security.

                                                                                        1. 1

                                                                                          It was actually one of my counterpoints to reproducible builds. Reproducible builds in deployment = reproducible attacks. The diversity approach makes systems as different as possible to counter attacks. In the big picture, the attacks reproducible builds address are rare, whereas the attacks diversified compiles address are really common. Better to optimize for the common case, so diversified builds are better for security than reproducible builds most of the time.

                                                                                          So, I pushed the approach recommended by Paul Karger, who invented the attack Thompson later wrote about. The early pioneers said we needed (at a minimum) secure SCM, a secure distribution method, safe languages to reduce accidental vulnerabilities, verified toolchains for compiles, and customers building locally from source. Customers should also be able to re-run any analyses, tests, and so on. This was standard practice for systems certified to high-assurance security (Orange Book B3/A1 classes). We now have even better tools for doing those exact things: commercial, FOSS, formally verified, informal-but-lean, and so on. So we can use what works, with reproducible builds still an option, especially for debugging.

                                                                                          1. 8

                                                                                            With load-time randomization you can both have and eat that reproducible-build cake.

                                                                                            1. 1

                                                                                              Cool idea, thanks for posting!

                                                                                              1. 1

                                                                                                That’s news to me. Thanks for the tip!

                                                                                              2. 1

                                                                                                It was actually one of my counterpoints to reproducible builds.

                                                                                                Running a build for each deployment is extremely impractical. Also, when binaries are generated and signed centrally, you have guarantees that the same binary is being tested by many organizations. Finally, different binaries will behave slightly differently, leading to more difficult debugging.

                                                                                                Hence the efforts on randomizing locations at load time.

                                                                                                1. 0

                                                                                                  The existing software that people are buying and using sometimes has long slowdowns on install, setup, load, and/or update. Building the Pascal-based GEMSOS from source would’ve taken a few seconds on today’s hardware. I think that’s pretty practical compared to the above. It’s the slow, overly-complicated toolchains that make things like Gentoo impractical. Better ones could increase the number of projects that can build from source.

                                                                                                  Of course, it was just an option: they can have binaries if they want them. The SCM and transport security protect them if developers are non-malicious. The rest of the certification requirements attempted to address sloppy and malicious developers. Most things were covered. Load-time randomization can be another option.

                                                                                              3. 1

                                                                                                It looks like the builds are seeded, so it may be possible to reconstruct a pristine image given the seed.
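
                                                                                                A toy sketch of why a recorded seed keeps a randomized build reproducible: the same seed always yields the same permutation (the function names here are made up, and the real diversifying compiler is obviously far more involved than a shuffle):

                                                                                                    #include <stdio.h>
                                                                                                    #include <stdlib.h>
                                                                                                    
                                                                                                    /* Deterministic, seeded Fisher-Yates shuffle: same seed -> same order. */
                                                                                                    static void shuffle(const char **items, size_t n, unsigned seed) {
                                                                                                        srandom(seed);
                                                                                                        for (size_t i = n - 1; i > 0; i--) {
                                                                                                            size_t j = (size_t)(random() % (long)(i + 1));
                                                                                                            const char *tmp = items[i];
                                                                                                            items[i] = items[j];
                                                                                                            items[j] = tmp;
                                                                                                        }
                                                                                                    }
                                                                                                    
                                                                                                    int main(void) {
                                                                                                        const char *funcs[] = { "f_init", "f_parse", "f_exec", "f_cleanup" };
                                                                                                        shuffle(funcs, 4, 0xC0FFEE); /* record this seed alongside the build */
                                                                                                        for (size_t i = 0; i < 4; i++)
                                                                                                            printf("%s\n", funcs[i]); /* identical order on every run with this seed */
                                                                                                        return 0;
                                                                                                    }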

                                                                                              1. 3

                                                                                                While I basically agree here, the problem is that if you are a new developer and you search the internet for how to build a menu for your website, basically all you will get back is giant JS frameworks that take up gobs of space, instead of the few lines of CSS and HTML5 you need (without any JS) to actually build a menu. I don’t have a good solution to this, but I see it as a major contributor to why this craziness keeps growing in size.

                                                                                                I think it also doesn’t help that when we do get new things like webauthn, we only get a JS interface to use them, which somewhat forces our hand to require JS if we want nice things. That doesn’t mean we have to shove 500MB of JS at the user to use webauthn, but we can’t do it with just HTML and a form anymore.

                                                                                                1. 7

                                                                                                  That’s because nobody should need to search the internet for how to make a menu. It’s a list of links. It’s something you learn in the first hour of a lecture on HTML, chapter 1 of a book on HTML.

                                                                                                  You probably neither need nor want to use webauthn. Certainly not yet! It was published as a candidate recommendation this year. Give others a chance to do the experimenting. Web standards used to take 10 years to get implemented. Maybe don’t wait quite that long, but I’m sure you’ll do fine with an <input type="password"> for a few years yet.

                                                                                                  1. 2

                                                                                                    I was just using both as an example, I apologize for not being clear.

                                                                                                    Yes, a menu is just a list of links, but most people want drop-down or hamburger menus now, and that requires either some CSS or some JS. Again, go looking and all the examples will be in JS, unless you go searching specifically for CSS examples.

                                                                                                    This is true of just about everything you want to do in HTML/Web land: the JS examples are super easy to find, the CSS equivalents are hard to find, and plain HTML examples are super hard to find.

                                                                                                    Anyways, I basically agree webauthn isn’t really ready for production use, but again, both of these were just examples; I picked webauthn because it’s something I’m currently playing with. You can find lots of new web tech that is essentially JS-only, despite not needing to be from a technical perspective. That is what I’m saying.

                                                                                                    1. 2

                                                                                                      I understand it’s just an example, but that’s my point really: it’s yet another example of something people way overcomplicate for no good reason. ‘It’s the first google result’ just isn’t good enough. It’s basic competency to actually know what things do and how to do things in HTML and CSS, and not accomplish everything by just blindly copy-pasting whatever the first google result for your task is.

                                                                                                      Web authentication? Sure it’s just an example, but what it’s an example of is people reinventing the wheel. What ‘new’ web technology isn’t just a shitty Javascript version of older web technology that’s worked for decades?

                                                                                                      1. 1

                                                                                                        LOL, “overcomplicate for no good reason” seems to be the entire point of so many JS projects.

                                                                                                        I think we agree more than we disagree.

                                                                                                        New developers have to learn somehow, and existing sites and examples tend to be a very common way people learn. I agree web developers in general could probably use more learning around CSS and HTML, since there is a LOT there, and they aren’t as easy as people tend to think on the surface.

                                                                                                        Well, webauthn has a good reason for existing. We all generally know that plain USER/PASS isn’t good enough anymore, especially when people use such crappy passwords and developers do such a crappy job of handling and storing authentication information. There are alternative solutions to FIDO U2F/webauthn, but none of them has had much, if any, success when it comes to easy-to-use, strong 2FA. The best we have is TOTP at this point, and it’s not nearly as strong cryptographically as U2F is. I don’t know of any web technology that’s worked for decades that competes with it. Google has fallen in love with it and, as far as I know, requires it for every employee.

                                                                                                        The closest would probably be mutual/client-certificate TLS authentication, but it’s been semi-broken in every browser for decades, the UI is miserable, and nobody has ever had a very successful long-term deployment work out (that I’m aware of). I know there was a TLS cert vendor that played with it, and Debian played with it some, both aimed at very technical audiences, and I don’t think anyone enjoyed it. I’d love to be proven wrong, however!

                                                                                                        Mutual TLS auth works better outside of the browser; things like PostgreSQL generally get it right, but it’s still far from widely deployed or easy to use, even after decades of existence.
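
                                                                                                        For a sense of what “getting it right” looks like on the client side, here’s a minimal libpq sketch of a mutual-TLS connection (the host, database, and file names are made up, and error handling is kept to a bare minimum). The server side would additionally have to demand client certs, e.g. via the clientcert option in pg_hba.conf:

                                                                                                            #include <stdio.h>
                                                                                                            #include <libpq-fe.h>
                                                                                                            
                                                                                                            int main(void) {
                                                                                                                /* sslmode=verify-full checks the server cert and hostname; sslcert/sslkey present our client certificate, which is what makes the TLS mutual. */
                                                                                                                PGconn *conn = PQconnectdb(
                                                                                                                    "host=db.example.com dbname=app user=app "
                                                                                                                    "sslmode=verify-full sslrootcert=root.crt "
                                                                                                                    "sslcert=client.crt sslkey=client.key");
                                                                                                            
                                                                                                                if (PQstatus(conn) != CONNECTION_OK)
                                                                                                                    fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                                                                                                            
                                                                                                                PQfinish(conn);
                                                                                                                return 0;
                                                                                                            }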

                                                                                                        That said, I’m sure there are tons of examples of crappy wackiness invented in web browser land. I have to be honest, I don’t make a living in web development land, and try to avoid it for the most part, so I could be wrong on some of this.

                                                                                                  2. 1

                                                                                                    Maybe check out Dynamic Drive. I used to get CSS-based effects off it for DHTML sites in the early 2000s. I haven’t dug into the site to see if they still have lots of CSS vs. JavaScript, though. A quick glance at menus shows CSS menus are still in there. If there’s plenty of CSS left, you can give it to web developers to check out after teaching them the benefits of CSS over JavaScript.

                                                                                                    I also noticed the first link on the left is an image optimizer. Using one is recommended in the article.

                                                                                                    EDIT: The eFluid menu actually replaces the site’s menu during the demo. That’s neat.

                                                                                                    1. 3

                                                                                                      An interesting project that shows how modern layouts can be built without JavaScript is W3C.CSS.

                                                                                                      /cc @milesrout @zie

                                                                                                      1. 3

                                                                                                        Thanks for the link. Those are nice demos. I’d rather they not have the editor, though, so I could easily see each one in full screen. They could have a separate link for the source or live editing, as is common elsewhere.

                                                                                                  1. 7

                                                                                                    GitHub has large businesses paying a lot for its service; it doesn’t need to blast you with adverts and subscription reminders. A news website has very few paying customers anymore, so what else is it to do? You can give lectures about keeping the UI clean and using no JS all day, but that doesn’t bring money in.

                                                                                                    The real issue is how these clickbait NYT articles keep getting to the top of HN when they usually have little to no substance.

                                                                                                    1. 7

                                                                                                      One can deliver ads without needing tens of MBs of data. I’m not a fan of the ads either, but to say ads are required to be giant RAM- and bandwidth-sucking monsters is blatantly false. That’s not a technical requirement for advertising; it’s just where we have ended up as the ad industry has infected the Internet.

                                                                                                      But even with MBs of ad-infested insanity plastered everywhere, the rest of the site doesn’t also need to add to the craziness with MBs of junk for what is essentially a page of text.

                                                                                                      1. 2

                                                                                                        This is true: we could replicate the ad-infestedness of a website with a tiny fraction of the processing power it takes today. But I think it’s more complex than that. To understand how to fix the problem, we need to know how we got here.

                                                                                                        Who is making websites slow? Is it the site developers, the ad-network developers, or the managers? It’s quite clear that most of the time it’s the ad-network scripts that slow websites down, since the web jumps to warp speed with an ad blocker. But why do some websites (primarily news sites) have 1000 different ad-network and tracking scripts? If you ask the site developers, they would probably tell you they hate it and wish they could remove most of them, but it’s the managers who request that tracking script #283 be added, and the devs don’t get much of a say in it. So posting an article on a developer-focused website, telling developers something they already agree with, is next to useless.

                                                                                                        This is the primary reason AMP makes websites fast: not because there is any tech magic that makes them fast, but because it lets developers say to managers, “We can’t do that. It’s impossible on AMP.”

                                                                                                        There is also another case: big websites that are slow and horrible to use on mobile. Twitter and Reddit are like this. I think here the reason is to push you toward the mobile app, so telling them to make their websites faster will also do nothing, because they don’t want you using the website.