Threads for David_Gerard

    1. 8

      Looks like the beginning of the end of the fantastic progress in tech that’s resulted from a relative lack of regulation.

      Also, probably, a massive spike in grift jobs as people are hired to ensure compliance.

      1. 97

        Looks like the beginning of the end for the unnecessary e-waste caused by companies forcing obsolescence and anti-consumer patterns made possible by the lack of regulations.

        1. 7

          It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.

        2. 6

          Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.

          And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.

          Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.

          I just still don’t think the cost of regulation is worth it.

          1. 66

            I’m an EU citizen, and I see this argument made every single time the EU passes new legislation affecting tech. So far, those worries never materialized.

            I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell their products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.

            Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as making the stem detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.

            1. 15

              Also, if I’m not mistaken, it is about a service-time-replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.

              1. 13

                In the specific case of batteries, yep, you’re right. The legislation actually carves out a special exception for batteries that’s even more manufacturer-friendly than the other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and have a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).

                Yes, I know: what about the extra regulatory burden? Said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE marking don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant; if anyone isn’t, they really got tricked by whoever’s selling them the batteries.

                The only additional requirement is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic), so in practice that just means that, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries, that’s basically free.

                i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).

              2. 5

                Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.

            2. 5

              So far, those worries never materialized.

              You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?

              Do you think Apple engineers are not capable of designing AirPods that have a removable battery?

              Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?

              I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.

              1. 24

                You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?

                Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.

                1. 2

                  Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.

                  1. 17

                    That kind of cookie requires no popup though, only the ones used to share info with third parties or collect unwarranted information.

                  2. 14

                    I can’t believe so many years later people still believe the cookie law applies to all cookies.

                    Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.

                    The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to refuse consent is not compliant.
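
                    To make the distinction concrete: below is a minimal, illustrative sketch using Python’s standard http.cookies module. The cookie names and values are made up, and legally what matters is a cookie’s purpose, not its name – the point is just that the first one is exempt from consent, while the second one is what actually triggers the banner.

                    ```python
                    from http.cookies import SimpleCookie

                    # Strictly necessary cookie (e.g. keeping a user logged in):
                    # exempt from the consent requirement, so no banner needed for it.
                    session = SimpleCookie()
                    session["sessionid"] = "abc123"             # hypothetical session token
                    session["sessionid"]["httponly"] = True
                    session["sessionid"]["secure"] = True

                    # Third-party tracking/analytics cookie: requires prior consent,
                    # which is what the consent banner is actually for.
                    tracking = SimpleCookie()
                    tracking["_ga"] = "GA1.2.123456789"         # hypothetical analytics ID
                    tracking["_ga"]["domain"] = ".example.com"

                    print(session.output())   # Set-Cookie: sessionid=abc123; Secure; HttpOnly
                    print(tracking.output())  # Set-Cookie: _ga=GA1.2.123456789; Domain=.example.com
                    ```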

                    1. 1

                      It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.

              2. 5

                I don’t know where the battery is in an AirPod

                We do know it, and it’s a Li-ion button cell: https://guide-images.cdn.ifixit.com/igi/QG4Cd6cMiYVcMxiE.large

                1. 4

                  FWIW this regulation doesn’t apply to the AirPods. But if for some reason it ever did, and based on the teardown here, the main obstacle to compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.

                  (But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)

            3. 4

              Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)

              But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the costs to new entrants to the field.

              It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.

              1. 38

                What is the alternative?

                Those companies are clearly engaging in anti-consumer behavior, actively trying to stop right to repair and more.

                The industry has demonstrated that it is incapable of self-regulating, so I think it’s about time to force their hand.

                This law can be read in its entirety in a few minutes, it’s reasonable and to the point.

                1. 5

                  What is the alternative?

                  Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.

                  Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:

                  But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party.

                  (from https://www.folklore.org/StoryView.py?project=Macintosh&story=Diagnostic_Port.txt )

                  Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.

                  1. 37

                    Did a standard electric plug also stifle innovation? Or mandates that a car has to fit in a lane?

                    Laws are the most important safety lines we have, otherwise companies would just optimize for profit in malicious ways.

                  2. 15

                    But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.

                    The reason is literally buckets and buckets of money from defense spending. You should already know this.

                    1. 23

                      It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income who all speak the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.

                      The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).

                      The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.

                    2. 2

                      That’s one of the reasons but it’s clearly not sufficient. Other countries have spent up on the taxpayer’s purse and not spawned a Silicon Valley of their own.

                      1. 1

                        “Spent up”? At anything near the level of the USA??

                        1. 1

                          Yeah.

                          https://en.m.wikipedia.org/wiki/History_of_computing_in_the_Soviet_Union

                          But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.

                          https://en.m.wikipedia.org/wiki/Economic_calculation_problem

                          Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.

                  3. 15

                    But I think there’s a reason Silicon Valley is in America

                    Wikipedia cites a number of factors:

                    Silicon Valley was born through the intersection of several contributing factors, including a skilled science research base housed in area universities, plentiful venture capital, permissive government regulation, and steady U.S. Department of Defense spending.

                    Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.

                    Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.

                    1. 15

                      Beware comrade, folks will come here to make slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.

                      What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community and which many of their chats were designed after, they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol with a 2000-page specification full of bugs, inconsistencies, & unspecified behavior.

                    2. 3

                      It’s a game of whack-a-mole, but at least this way we don’t over-regulate.

                      Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.

                      One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.

                      1. 7

                        It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you, or gives you RF burns, or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.

                      2. 4

                        I am curious however: do you see the current situation as tenable? You mention above that there are anti-consumer practices and the like, but also express concern that regulation will quickly slide down a slippery slope. So I am curious whether you think the current system, where there is more and more lock-in both on the web and in devices, can be pried back from those parties?

                  4. 11

                    The alternative is not regulating, and it’s delivered absolutely stunning results so far.

                    Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?

                    There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.

                    It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.

                    Now, we see monopolies appearing again, and the associated anti-consumer decisions that benefit the bigger players. This situation is well known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.

                    Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations won’t attempt to maximize their profits when they have an opportunity.

                  5. 9

                    Well, I guess I am wrong then, but I prefer slower progress, slower computers, and generating less waste than just letting companies do all they want.

                  6. 7

                    Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.

                    This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.

                  7. 5

                    As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).

                    1. 3

                      As a customer, I react to this by never voluntarily buying Apple products.

                      Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.

                      (Although I’d argue that anti-circumvention laws, etc., prescribing what owners can’t do with their devices are equally wrong, and should also not be a thing).

                      1. 15

                        The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.

                        There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
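
                        For context, a TOTP code is just a deterministic function of a shared secret and the current time (RFC 6238) – so if the secret lives next to the password, it isn’t really a second factor. A minimal, illustrative sketch in Python with a made-up secret:

                        ```python
                        import base64, hashlib, hmac, struct, time

                        def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
                            """Current RFC 6238 TOTP code for a base32-encoded shared secret."""
                            key = base64.b32decode(secret_b32, casefold=True)
                            counter = int(time.time()) // period  # 30-second time step
                            digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
                            offset = digest[-1] & 0x0F            # dynamic truncation (RFC 4226)
                            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
                            return str(code % 10 ** digits).zfill(digits)

                        print(totp("JBSWY3DPEHPK3PXP"))  # made-up secret
                        ```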

                        1. 1

                          The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product.

                          I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?

                          For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.

                          But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?

                          1. 3

                            But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?

                            They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap components (pairing), that are supported for only 1 year, what are people to do? If people have no idea how secure someone’s authentication on GitHub is, how can they make an informed decision about security?

                            But by the same token, if people don’t care to research the repair costs of their devices before buying them

                            When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.

              2. 21

                But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the costs to new entrants to the field.

                I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:

                1. Ensuring “additional” compliance is often a one-time cost. As an EE, you’re supposed to know these things and keep up with them, you don’t come up with a schematic like they taught you in school twenty years ago and hand it over to a compliance consultant to make it deployable today. If there’s a major regulatory change you maybe have to hire a consultant once. More often than not you already have one or more compliance consultants on your payroll, who know their way around these regulations long before they’re ratified (there’s a long adoption process), so it doesn’t really involve huge costs. The additional compliance testing required in this bill is pretty slim and much of it is on the mechanical side. That is definitely not one-time but trivially self-certifiable, and much of the testing time will likely be cut by having some of it done on the supplier end (for displays, case materials etc.) – where this kind of testing is already done, on a much wider scale and with a lot more parameters, so most partners will likely cover it cost-free 12 months from now (and in the next couple of weeks if you hurry), and in the meantime, they’ll do it for a nominal “not in the statement of work” fee that, unless you’re just rebranding OEM products, is already present on a dozen other requirements, too.
                2. An embarrassing proportion of my job consisted not of finding creative ways to fit a removable battery, but in finding creative ways to keep a fixed battery in place while still ensuring adequate cooling and the like, and then in finding even more creative ways to design (and figure out the technological flow, help write the servicing manual, and help estimate logistics for) a device that had to be both testable and impossible to take apart. Designing and manufacturing unrepairable, logistically-restricted devices is very expensive, too, it’s just easier for companies to hide its costs because the general public doesn’t really understand how electronics are manufactured and what you have to do to get them to a shop near them.
                3. The intrinsic difficulty of coming up with a good design isn’t a major barrier to entry for new players any more than it is for anyone else. Rather, most of them can’t materialise radically better designs because they don’t have access to good suppliers and good manufacturing facilities – they lack contacts, and established suppliers and manufacturers are squirrely about working with them because they aren’t going to waste time on companies that are here today and gone tomorrow. When I worked on regulated designs (e.g. medical) that had long-term support demands, that actually oiled some squeaky doors on the supply side, as third-party suppliers are equally happy selling parts to manufacturers or authorised servicing partners.

                Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-run marketing and logistics departments can turn many of its extra requirements into real opportunities.

            4. 1

              I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen who has the same priorities I do not be able to buy the device they want?

              1. 4

                Then I have some very good news for you!

                The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.

          2. 16

            And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.

            I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.

            For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 for six. IRIX versions were supported for 5 (or 7?) years, IIRC.

            For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate for their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.

            I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that it generates. Private companies should be more than well aware that there’s no such thing as a free lunch.

            1. 6

              For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period

              Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.

              For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve

              Deregulation is the “ground state”.

              It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.

              Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.

              Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, government-mandated software (e.g. Korean banking, Android- and iOS-only identity apps in Australia) and so forth are regulation aimed against customers.

              So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.

              1. 14

                Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.

                Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ’90s. Schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.

                It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.

                That’s not really true. It assumes a balance of power between companies and consumers that is exactly equal.

                Companies force people to upgrade by tying services into the device and then dropping support in those services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside; they buy a phone to be able to run programs that accomplish specific things. If they can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because the vendor has locked down the bootloader (and/or not documented any of the device interfaces), then consumers have no choice but to upgrade.

                Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.

                Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1], and so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates for longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime (which research has shown has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.

                Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.

                Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.

                [1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.

                1. 5

                  Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.

                  That is remarkably untrue. At least one entire school of economics proposes exactly that.

                  In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.

                  “Market failure” just means “the market isn’t producing the prices I want”.

                  1. 8

                    Is the school of economics you’re talking about actual experimenters, or are they armchair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?

                    I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long-lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.

                    In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.

                    Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.

                    Now looking at the actual purpose… the second people exchange stuff for a price, there’s pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of markets is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.

                    1. 8

                      I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time.

                      This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that it’s a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.

                      Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.

                      One big red flag for instance is the existence of such long-lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.

                      To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.

                      Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.

                      I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.

                      Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now?

                      Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.

                      Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The regulations eventually banned goods below a certain efficiency rating, but it was largely unnecessary because the market had adjusted and most things were A-rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.

                      1. 3

                        Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV

                        I can see how such usurpation could distort my view.

                        Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist.

                        Well… yeah.

                        Precisely what [markets] will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here.

                        I love this example. It plainly shows that often people make the choices they do not because they don’t care about such and such criterion, but because they just can’t measure the criterion even if they care. Even a Libertarian should admit that making good purchase decisions requires being well informed.

                        You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do).

                        To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the Internet less controllable and freer.

                        A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.

                        1. 2

                          My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other.

                          This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable? If it’s the latter, then you’re going to want to amortise the cost across a decade, and pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.

                          The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.

                          1. 3

                            That’s going to be capital investment.

                            Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the more reluctant private enterprises will be to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.

                            So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.

                            Not saying this would be easy though. The difficulties you foresee are spot on.

                            The problem with public ownership is that it’s hard to incentivise efficiency improvements.

                            Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject; I’m not informed enough to have an opinion on where the public/private frontier is best placed.

                            The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.

                            Good point, I’ll keep that in mind.

                            1. 2

                              Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the more reluctant private enterprises will be to invest in it, and if they do, the more they will want to extract rent from their investment

                              There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so, because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away taxpayers’ money).

                              This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.

                              Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win), the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
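
                              To illustrate that second variant with made-up numbers: it is essentially a second-price (Vickrey-style) procurement auction, where the lowest bidder wins but is paid the runner-up’s bid, so bidding your true cost is the safe strategy. A minimal sketch:

                              ```python
                              def second_price_procurement(bids: dict[str, float]) -> tuple[str, float]:
                                  """Lowest bidder wins the contract but is paid the second-lowest
                                  bid (needs at least two bids)."""
                                  ranked = sorted(bids.items(), key=lambda kv: kv[1])
                                  winner = ranked[0][0]
                                  payment = ranked[1][1]  # price set by the runner-up's bid
                                  return winner, payment

                              # Made-up bids for a building contract, in millions:
                              print(second_price_procurement({"A": 4.2, "B": 4.8, "C": 5.1}))  # ('A', 4.8)
                              ```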

                              So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.

                              The dangerous potholes throughout UK roads might warn you that this doesn’t always work.

                              1. 3

                                A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money.

                                Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.

                                This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them)

                                The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.

                                The dangerous potholes throughout UK roads might warn you that this doesn’t always work.

                                It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.

                      2. 1

                        The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them.

                        Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.

                        I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, and the only way to really find out would be to set up a Great Leap Forward in the 21st century.

                        1. 3

                          Were they actually successful, or did they only decrease operating energy use?

                          I may be misunderstanding your question, but energy ratings aren’t based on energy consumption across the device’s entire lifetime; they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operation of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.

                          Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances has been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.

                          1. 2

                            You can’t get a better rating by making a device that lasts half as long.

                            Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.

                            but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.

                            That’s good to hear.

                            1. 4

                              Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.

                              For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of kWh/cycle rounded to three decimal places.

                              I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.

                  2. 10

                    Much like orthodox Marxism-Leninism, the Austrian School describes economics by how it should be, not how it actually is.

              2. 10

                Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.

                The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.

                I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty-year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.

                It’s not supposed to achieve anything in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.

                Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved those goals in the short run – it was conserving these achievements that turned out to be more problematic.

                As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.

                Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt on how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.

                This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.

      2. 23

        It’s not like there has been major progress in smartphone hardware over the last 10 years.

        Since 2015 every smartphone has been much the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation makes progress more difficult. IMHO it will drive innovation: phones will have to be made more durable.

        1. 16

          And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.

          1. 3

            Until that user wants to play a mobile game. Just as PC hardware specs were propelled by gaming, the mobile market is driven by games, and mobile is now, I believe, the most dominant gaming platform.

            1. 8

              I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).

              The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.

              1. 3

                I don’t think the games are really that CPU / GPU intensive

                But a lot of them are intensive, and enthusiasts often prefer those. Still, the time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.

                Anecdotally, I was recently reunited with my OnePlus 1 (2014) running LineageOS, and it was choppy at just about everything, especially loading map tiles in OSM (this was with the apps from when I last used it in 2017, in airplane mode, so not just contemporary bloat). I tried Ubuntu Touch on it this year (2023) (listed as having great support) and it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even if not performance bottle-necked, efficiency is certainly better (highly doubt it’d save more energy than the cost of just keeping an old device, but still).

                1. 4

                  My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.

                  I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.

                2. 2

                  Original comment mentioned iPhone 8 specifically. Android situation is completely different.

                  Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.

                  1. 1

                    Flagship Qualcomm chips for Android have been fine for years, and more than competitive once you factor in cost. I would doubt anyone is buying into either platform purely based on performance numbers anyhow, versus ecosystem and/or wanting hardware options not offered by one or the other.

                    1. 1

                      competitive once you factor in cost

                      That’s what I’m saying — Qualcomm goes for large volumes of mid-range chips, and does not have products on the high end. They aren’t even trying.

                      BTW, I’m flabbergasted that Apple put M1 in iPads. What a waste of a powerful chip on baby software.

                      1. 1

                        They aren’t even trying.

                        Uh, what about their 8xx-series SoCs? On paper they’re comparable to Apple’s A-series; it’s the software that’s usually worse.

                          1. 1

                            Still a massacre.

                            Yeah, true, I could have checked myself. Gap is even bigger right now than two years ago.

                            Qualcomm is in a self-inflicted rut enabled by their CDMA stranglehold. Samsung is even further behind because their culture doesn’t let them execute.

                            https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-5-single-Android-980x735.jpeg

                            https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-Multi-Android-980x735.jpeg

                            1. 1

                              Those are some cherry-picked comparisons. Apple releases on a different cadence. Check right now, and the S23 beats it, as do most current flagships. If you blur the timing, it’s all about the same.

                      2. 1

                        It would cost them more to develop and commission fabrication of a more “appropriate” chip.

                      3. 1

                        The high-end Qualcomm is fine. https://www.gsmarena.com/compare.php3?idPhone1=12082&idPhone3=11861&idPhone2=11521#diff- (may require viewing as a desktop site to see 3 columns)

                        With phones of the same tier released before & after, you can see the benchmarks are all close, as is battery life. Features are wildly different though, since Android can offer a range of different hardware.

                3. 2

                  efficiency is certainly better (highly doubt it’d save more energy than the cost of just keeping an old device, but still).

                  It doesn’t for laptops[1], so I doubt it would for smartphones either.

                  [1] https://www.lowtechmagazine.com/2020/12/how-and-why-i-stopped-buying-new-laptops.html

          2. 3

            I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.

            1. 4

              I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.

              1. 2

                I didn’t think I had to. Android 9, 10, 11, and 12 have distinct visual styles, and between vendors this distinction can vary further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, used to), but consumers notice even if they can’t clearly relate what they’ve noticed.

                1. 4

                  I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?

                  1. 1

                    I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.

                    LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.

                    1. 5

                      I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.

                      So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.

                      LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.

                      If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.

        2. 4

          IMHO it will drive innovation

          All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.

          1. 22

            Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.

            What makes you think that this innovation is not wanted by customers?

            There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand-believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with the app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means that they were given a Hobson’s choice.

            1. 5

              but manufacturers don’t provide it because it goes against their interest.

              The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.

            2. 1

              What makes you think that this innovation is not wanted by customers?

              Okay, that was sloppy of me.

              “Not wanted more than any of the other features on offer.”

              “Not wanted enough to motivate serious investment in a competitor.”

              That last is most telling.

      3. 15

        This cynical view is unwarranted in the case of EU, which so far is doing pretty well avoiding regulatory capture.

        EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements, and are tightened gradually.

        Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.

      4. 7

        Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)

        The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.

        1. 17

          Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s

          I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).

          Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.

          More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.

          Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.

          1. 12

            Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.

            Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least a two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.

            1. 13

              But they do know to look for energy classes or green markings or whatever.

              The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.

              It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.

              1. 1

                Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?

                I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.

                1. 6

                  There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:

                  • The number of years that the device would get security updates.
                  • The maximum time between a vulnerability being disclosed and the device getting the update.

                  The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels, but companies were free to promise as much or as little as they wanted. Her research looked at this across a few consumer goods classes and used the standard methodology where users were shown a small number of devices, with price, features, and the security SLA on the labels varied between them, and were then asked to pick their preference. I can’t remember the exact numbers but she found that users were consistently willing to select higher-priced things with better security guarantees, and favoured them over some other features.

          2. 1

            All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.

            Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production, and the wear and tear on the components.

      5. 6

        It also might be a great opportunity for innovation in modular design. Say, Apple is always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced unibody MacBooks? Or the 10-minute video bragging about their laminated screens?

        I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and also waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.

        Depending on how you define “progress” there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that would ever come to the mass-produced iPhone, but maybe a useful feature for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.

      6. 4

        Phones have not made meaningful progress since the first few years of the iPhone. It’s about time.

      7. 3

        That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.

      8. 2

        There’s a chance that tech companies start to make EU-only hardware.

      9. 2

        This overall shift will favor long-term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.

    2. 5
    3. 11

      What I’d be more interested in seeing would be “things that people think are 10x but are actually -10x”.

      1. 25

        Tactical tornado engineering is a great example of this:

        Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.

        From “A Philosophy of Software Design” by John Ousterhout. https://www.goodreads.com/author/quotes/14019088.John_Ousterhout

        1. 5

          …I totally recognize this in one of my coworkers. XD I think we have successfully channeled them into a place where their unrestrained enthusiasm gets applied towards the forces of good with a minimum of fallout: front line troubleshooting, proof-of-concept testing and customer demos. The fast pace and tactical absorption are very useful when things need to be solved Right Now, and then afterwards more cautious engineers can wade through the wreckage to cherry-pick lessons learned and scoop up the more interesting bash scripts to be turned into real tools and upstreamed.

          To their credit, they also recognize this is their MO and tend to stay away from development that would have deeper or wider impacts on core systems, and prefer to enjoy the endless novel problem-solving of operations-y stuff.

          1. 6

            This is definitely one of the roles where folks like that can shine. But it isn’t a panacea.

            A person in a similar role once told me that they wrote their own implementation of a cryptographic algorithm because “[the platform used in the product] didn’t allow importing the existing npm package for it” and that it wasn’t a problem because “they tested the implementation with all the standard test vectors and it passed all tests.”

            It took me two hours to explain the difference between “this implementation has no security vulnerabilities” and “we don’t know of any security vulnerabilities in this implementation.”

            Thankfully, I caught it early enough that it didn’t cause any serious issues. It would have been really bad if we had shipped it to a customer and it turned out being problematic in some unforeseen way.

            1. 3

              oh ho ho wow, good catch! I had a similar experience once explaining that we weren’t going to ship a telemetry-monitoring tool to our customers that consisted of a bash script calling curl on undocumented HTTP endpoints. But I think your story wins. At least they did test their impl with all the standard test vectors?

              1. 3

                They did. But standard test vectors are proof of functionality, not safety.

                For example, test vectors didn’t protect the Java 15-18 ECDSA implementation from accepting (0, 0) as a valid signature for any message: https://twitter.com/tqbf/status/1516570590211153922

                Now, Oracle can just go “shrug, stuff happens” and fix it, but I was at a small-ish startup at the time and wasn’t willing to bet our future on being able to do the same. Not to mention that it would have much more likely been (in part) my mess to clean up, and certainly not the tactical tornado’s.
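
                For the curious, the missing guard really is a one-liner. A minimal sketch in Python – the constant below is secp256r1’s group order, the function name is mine, and this illustrates the check rather than the actual Java code:

                    # The input validation an ECDSA verifier must do before any curve
                    # math: both halves of the signature must lie in [1, n-1]. Java
                    # 15-18 omitted it, so the degenerate (0, 0) "signature" verified
                    # any message.
                    N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

                    def signature_components_valid(r: int, s: int) -> bool:
                        return 0 < r < N and 0 < s < N

                    assert not signature_components_valid(0, 0)  # the forged signature

                Standard test vectors never exercise that path, because they are all well-formed – which is exactly the functionality-versus-safety gap above.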

        2. 3

          I’ve seen people promoted for writing a lot of code. In particular, the kind of developer who seems to have a mindset of ‘never do in 5 lines of code what you could do in 200’. They look amazingly productive because they’ve added a load of new features to the product and the manager doesn’t notice that the rest of the team is less productive because they’re spending all of their time fixing bugs in this person’s code.

        3. 3

          These guys have long been a frickin’ pain in my backside. Mr. Minimum Viable Product, Except Not Even Actually Viable. One example left four years ago and we’re still finding messes he left.

          1. 1

            If one of these people is in a code base right at the start, sometimes you can never fix it. I remember starting a job and working on a particular API model. I asked who the SME was and someone told me “Well, I think it’s you now.” It was probably the most important data model in the system and one of the most complex ones. It was an absolute rat’s nest of JavaScript. That was 5 years ago and I don’t think it ever got better. I don’t think they’ll ever fix it.

        4. 2

          Too real. Whenever I hear the word “pragmatic” in a programming discussion I get tech debt PTSD.

        5. 2

          I like to think this is about time at place. There are times when going into tornado mode is super useful: prototyping, grenade-diving, etc.

          But you have to own the maintenance of your own shit.

    4. 10

      This post has a curious absence of even mentioning distrust of Google with yet more of our personal data, which was a major theme of discussion in all venues I saw this issue mentioned in, and certainly seemed to play a major part in distrust of an opt-out configuration.

      1. 15

        Because they’ve been very clear: it doesn’t collect personal data. It’s designed to determine how and when people use various language features, anonymously.

        1. 3

          Right, because Google is not a company that would like to collect more and more data over time. It’s so trustworthy that a ton of people rejected this.

        2. 4

          The post claims to summarise the discussion. It glaringly does not summarise the discussion.

          Your comment doesn’t actually address what I said, it addresses something else entirely. Do you have an answer to what I said?

          1. 7

            I’m not understanding what you’re angry about. I agree with mperham that it’s quite a stretch to consider which language features are being used by a random golang toolchain installation as “personal” information.

            I also don’t see any claim to summarize the discussion in the original blog post.

            Seems like you just want to be angry at Google and yell at anyone who dares defend them, even when (in this case) Google seems to be trying to do the “right” thing.

          2. 2

            I couldn’t find where it claims to summarize the discussion. Can you quote the passage that leads you to believe that?

            1. 3

              The second paragraph. You’re being disingenuous.

              1. 2

                I notice you didn’t provide a quote and instead accused me of bad faith. Cool.

                1. 3

                  See my top-level comment for an explicit examination of the second paragraph. I would characterize it as manufacturing consent or astroturfing; a top-down plan is being described as community-approved.

      2. 1

        I assumed other topics fell under the statement in the second paragraph that said:

        In the GitHub discussion, there were some unconstructive trolls with no connection to Go who showed up for a while

    5. 69

      “people think programming in Excel is programming”

      I mean, it is Turing-complete with =LAMBDA. I find it a bit distressing when programmers, especially influential ones, try to denigrate an environment or language they don’t like as “not real programming”. This reminded me of an article on contempt culture.

      there is no way to have a flexible innovative system and serve the Posix elephant.

      IBM i, which actually predates POSIX by some amount, is somewhat popular in my circles as an example of “what could have been” regarding CLIs, alternative programming paradigms, etc. It has a functional POSIX layer via AIX emulation (named PASE).

      DOS and OS/2 had EMX which provided most of POSIX atop them. Mac OS 8/9 had GUSI for pthreads atop the horror show known as Multiprocessing Services. I’m pretty sure the Amiga had a POSIX layer. Stratus VOS. INTEGRITY. There are plenty of non-traditional, non-Unix platforms that are – at least mostly – POSIX conformant.

      What I’m saying is there is absolutely no technological reason you couldn’t slap a POSIX layer atop virtually anything, even if it wasn’t originally designed for it. Hell, I would even suggest you could go all-out and design this “flexible innovative system” and have someone else put a POSIX layer atop it. You inherit half the world’s software ecosystem for “free” with good enough emulation, and your native apps will run better and show the world why they should develop for that platform instead of against POSIX, right?

      But then, even Windows is giving up and making WSL2 a first-class citizen. This isn’t because of some weird conspiracy to make all platforms POSIX. It is because the POSIX paradigm has evolved, admittedly slowly in some cases, to provide a “good enough” layer on which you can build different platforms.

      And abandoning POSIX could also lead to a bunch of corporations making locked-in systems that are not interoperable. Let’s not forget the origins of X/Open and why this whole thing exists…

      APIs for managing threads and access to shared memory should be re-thought with defaults created for many-core systems

      Apple released libdispatch in 2009 with Snow Leopard under an Apache 2.0 license. It supports Mac OS, the BSDs, Linux, Solaris, and since 2017, Windows (using NT native blocks, even). I actually wrote an iOS app using GCD to process large XML API responses and found it did exactly what it was supposed to: on devices with more cores, more requests could be processed at once, making the system more responsive. At the same time, at least the UI thread didn’t lock up when your single-core 3GS was still churning through.
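
      If you’ve never touched GCD, the shape of the pattern – not the libdispatch API itself – can be sketched in a few lines. A rough Python analogue, with made-up names, purely to show the idea:

          import os
          from concurrent.futures import ThreadPoolExecutor

          def parse_response(payload: bytes) -> int:
              return len(payload)  # stand-in for the XML parsing work

          # The pool is sized to the hardware, so a many-core device drains the
          # queue faster while a single-core one still stays responsive.
          pool = ThreadPoolExecutor(max_workers=os.cpu_count())

          def handle_response(payload: bytes, update_ui) -> None:
              future = pool.submit(parse_response, payload)  # roughly dispatch_async
              future.add_done_callback(lambda f: update_ui(f.result()))

      The point is that nothing in the calling code mentions a core count: the system sizes the pool, and the same code gets wider on bigger hardware for free.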

      And yet nobody uses libdispatch. Sometimes I hear “ew, Apple”, which may have been a bigger influence back when it was released. Now, there’s really no excuse. I think it’s just inertia. And nobody wants to introduce more dependencies when you’re guaranteed POSIX and it works “good enough”.

      create systems that express in software the richness of modern hardware

      I think it should be the exact opposite. Software shouldn’t care about the hardware it is running on. It could be running on a Raspberry Pi Zero, or a z16. The reason POSIX has endured for this long is because it gives everyone a base platform to build more rich frameworks atop. Libraries like libdispatch are a good example of what can be built to take advantage of different scales of hardware without abandoning the thing that ensures we have an open standard that all systems are “guaranteed” to (mostly) follow.

      I might use this comment as the basis for an article on my own, and go into more detail about what I think POSIX gets right and wrong, and directions it could/should head.

      1. 25

        I might use this comment as the basis for an article on my own, and go into more detail about what I think POSIX gets right and wrong, and directions it could/should head.

        I’d love to read that!

      2. 11

        I agree with pretty much all of this.

        Relatedly, there is a misconception that has been around for years that Haiku, which I am one of the developers of, is “not a UNIX” or “only has POSIX compatibility non-‘natively’”. When this is corrected, some people are more than a little dismayed; they thought of Haiku as being “different” and “exotic” and are sad to discover that, under the hood, it’s less so than they imagined! (Or, often, it still is quite different and exotic; it’s just that “POSIX” means a whole lot less than most people may come to assume from Linux and the BSDs.)

        The review of Haiku’s latest release in The Register originally included this misconception, and I wound up in an extended argument (note especially the reply down-thread which talks about feelings) with the author of the article about it (and also in an exchange with the publication itself on Twitter.)

        1. 3

          Relatedly, there is a misconception that has been around for years that Haiku, which I am one of the developers of, is “not a UNIX”

          Isn’t that true? It’s not a descendent of BSD or SysV, nor has it ever been certified as a UNIX. If someone called Haiku a UNIX then they’d have to say the same about Linux, which would be clearly off. Even Windows NT4 was POSIX-compliant and I’ve never met anyone who considers Windows to be a UNIX variant.

          The review of Haiku’s latest release in The Register originally included this misconception, and I wound up in an extended argument (note especially the reply down-thread which talks about feelings) with the author of the article about it

          Hah, I had a similar (though briefer) exchange with the same author at https://news.ycombinator.com/item?id=34772982. I think that particular person just doesn’t have much interest in getting terminology correct before rushing their articles out the door.

          1. 3

            As I said on HN:

            Gee, thanks.

            This may come as an unpleasant revelation, but sometimes, just saying to someone “that isn’t right” is not going to change their mind. You didn’t even bother to reply to my comment on HN, so how you can call that an “exchange” puzzles me. You posted a negative critical comment, I replied, and you didn’t.

            Ah well. Your choice.

            No, I do not “just rush stuff out”, and in fact, I care a very great deal about terminology. I’ve been a professional writer for 28 years, have written for some 15 magazines and sites in a paid capacity, and have been a professional editor as well. It is not possible to keep working in such a business for so long if you are slapdash or slipshod about it.

            As for the technical stuff here:

            I disagree with @waddlesplash on this, and I disagree with you as well.

            I stand by my position on BeOS and Haiku: no, they are not Unixes, nor even especially Unix-like in their design. However, Haiku has a high degree of Unix compatibility – as does Windows, and it’s not a Unix either. OpenVMS and IBM z/OS also have high degrees of Unix compatibility, and both have historically passed POSIX testing, meaning that they could, if they wished, brand as being “a UNIX”.

            Which is where my disagreement with your comment here comes in.

            Linux has passed the testing and as such it is a UNIX. Like it or not, it has won Open Group branding, and although none of the 2-3 vendors who’ve had it in the past still pay for the trademark, it did pass the test and thus it counts.

            No direct derivative of AT&T UNIX is still in active development any more.

            No BSD has ever sought the branding, but I am sure they easily could pass the test if they so wished. It would however be a waste of money.

            I would characterise Haiku the same as I would OpenVMS, z/OS and Windows NT: (via its native POSIX personality) a non-Unix-like OS, which does not resemble traditional Unix in design, in implementation, in its filesystem design or layout, or in its native APIs. However, all of them are highly UNIX compatible – about as UNIX compatible as it’s possible to be without actually being one. OpenVMS even used to have its own native X11 server, although I don’t think it’s maintained any more. Haiku, like RISC OS, has its own compatibility library allowing X11 apps to run and display in the native GUI without running a full X server.

            Linux is a UNIX-like design, implemented in the same language, conforming to the same spec, implementing the same APIs. Unlike Haiku, z/OS or OpenVMS, it has no other alternative native APIs or non-UNIX-like filesystems or anything else.

            Linux is a UNIX. By the current strict technical definition: it passed the Open Group tests which subsumed and replaced POSIX decades ago. And by a description: it’s a UNIX-like design built with Unix tools in the Unix preferred language, and nothing else.

            Haiku isn’t. It hasn’t passed testing, it isn’t Unix like in design, or implementation, or native APIs, or native functionality.

            The one that is arguable, to me, is none of the above.

            It’s macOS.

            macOS has a non-Unix-like kernel, derived from Mach, but with a big in-kernel UNIX server derived from BSD code. It has its own native non-Unix-like APIs, but they mostly sit on top of a UNIX-derived and highly UNIX-like layer. It has its own native GUI, which is non-UNIX-like, and its own native configuration database and much else, which are non-UNIX-like and implemented in non-UNIX-like languages.

            It doesn’t even have a case-sensitive filesystem, one of the lowest common denominators of Unix-like OSes.

            But, aside from its kernel, it’s highly UNIX-like until you get up to the filesystem layout and the GUI layer – all the UNIX directories are there, just mostly empty, or populated with stubs pointing the curious explorer to Netinfo and so on.

            For X11 apps, it does in fact run a whole X server based on X.org.

            But macOS has passed testing and Apple does pay for the trademark so, by the strict technical definition, it 100% is a UNIX™.

          2. 1

            If someone called Haiku a UNIX then they’d have to say the same about Linux, which would be clearly off.

            Well, there are people who say it about Linux. After all, POSIX is part of the “single UNIX specification”, so it is somewhat reasonable. But if people want to be consistent and not use the term for either Linux or Haiku, that’s fine by me. It’s using the term for only one and not both that I object to as inconsistent.

      3. 4

        libdispatch is kind of an ironic example. The APIs lend themselves to implementations with heap allocations at every corner, and to thread explosion. Most of that could be addressed with intrusive memory and enforced asynchronous behavior at the API boundary.

        It’s like POSIX in a sense where it’s “good enough” for taking some advantage of various hardware configurations but doesn’t quite meet expectations on scalability or feature set for some applications. POSIX apis like pthread and select/poll, under this lens, also take advantage of hardware and are “good enough”.

        If that’s all that is required by the application then it’s fine, but lower/core components like schedulers, databases, runtimes, and those which provide the abstractions that people use over POSIX apis generally want to do as best they can. Only offering POSIX at the OS level limits this and I believe is why things like io_uring on linux, ulock on darwin, and even epoll/kqueue on both exists.

        Now these core components either try (pretty hard) to design APIs that work well across all of these extensions (including, and limitingly so, POSIX) or they just specialize to a specific platform. It’s too late to change now, but there are more scalable API decisions for memory, IO, and synchronization that POSIX could have adopted, ones that could have been built on top of older POSIX APIs – surprisingly, looking to Windows’ ntdll here for inspiration.

      4. 4

        What I’m saying is there is absolutely no technological reason you couldn’t slap a POSIX layer atop virtually anything, even if it wasn’t originally designed for it. Hell, I would even suggest you could go all-out and design this “flexible innovative system” and have someone else put a POSIX layer atop it.

        Well there’s at least one, and the article starts into this a little bit: That POSIX layer you’re talking about takes up space and CPU, so if you’re designing a small system (or even a “big” one optimised for cost or power efficiency) you might like to have that on the negotiating table.

        I heard a story about a chap who sold Forth chips, and every time he tried to break out they would ask for a POSIX demo. They eventually made one, and of course it was slow and made everything warm, so it didn’t help. Now if you know Forth, this makes sense, but if you don’t know Forth – and heck, clearly management didn’t either – you might not understand why you can’t have your cake and eat it too, so “slapping a POSIX layer atop” might even make sense. But Forth systems are really different, really ideal if you can break your problem down into a bunch of little state machines, but it’s hard to sell that to someone whose problem is buying software.

        Years later, I worked for a company who sold databases, and a frequent complaint voiced by the market, at trade shows and in the press, was that they didn’t have an SQL layer, so they made one, but it really just handled the ODBC and some basic syntactic differences, like maybe it was barely SQL92 if you squinted, so the complaint continued to be heard in the market and the company made another SQL layer. When I joined they were starting the fourth or fifth version, and I’m like, this is just like the Forth systems!

        But then, even Windows is giving up and making WSL2 a first-class citizen. This isn’t because of some weird conspiracy to make all platforms POSIX. It is because the POSIX paradigm has evolved

        This might be more to do with the value of Linux as opposed to POSIX. For many developers (maybe even most), Linux is hands-down the best development environment you can have, even if your target is Windows or Mac or tiny Forth chips, and I don’t think it’s because of POSIX, or really any one thing, but I do think if something else had been better, Microsoft probably would have used that instead (or in addition to it: look at how they’re treating the web platform with Edge!)

        That being said, I think POSIX was an important part of why Linux is successful: Once upon a time Linux was a pretty goofy system, and at that time a lot of patches were simply justified as compliance with POSIX, which rapidly expanded the suite of software Linux had access to. Having access to a pretty-good spec and standard meant people who ported programs to early Linux fixed those problems in the right place (the kernel and/or libc) instead of adding another #ifdef __linux__.

        1. 1

          That POSIX layer you’re talking about takes up space and CPU, so if you’re designing a small system (or even a “big” one optimised for cost or power efficiency) you might like to have that on the negotiating table.

          I can appreciate that. I focused on that because the article spent so much time waxing poetic about how it’s “hard” to find a computer with less than “tens of CPUs”. At that scale, it would be equally “hard” to justify not having a POSIX layer.

          A chip designed to run Forth would be quite an interesting system! I don’t know if I’ve ever heard about one. I know of LispMs, and some of the specialised hardware to accelerate FORTRAN once upon a time.

          they didn’t have an SQL layer, so they made one

          You can make an SQL layer atop pretty much any database, even non-relational ones, if you squint hard enough. I suppose it’s the same thing with POSIX layers. Not always the best idea, but the standards are generous enough in their allowances that it can be done.

          POSIX was an important part of why Linux is successful

          Yes. In the early days, it gained it a lot of software with little amount of porting. Now, it makes it easy to port workloads off other Unix platforms (like Solaris). In the future, it might just be the way that Next-New-OS bridges to bring Linux workloads to it.

          1. 2

            A chip designed to run Forth would be quite an interesting system! I don’t know if I’ve ever heard about one.

            These guys make Forth chips, 144 “CPUs” to a die, which is great for some applications, but POSIX is much too big to fit on even one of those chips.

            In the future, it might just be the way that Next-New-OS bridges to bring Linux workloads to it.

            Quite possibly we are seeing that right now with the “containerisation” fetish.

    6. 13

      I have no comments on the content, but I’m rolling my eyes at how the awful Twitter symptom of chopping up one’s text into little pieces is being brought over to Mastodon where it’s much less necessary.

      IIRC Mastodon still has a character limit, but it’s significantly higher (500 chars?) I guess the devs thought to themselves “Huh, for some reason Twitter has a really low character limit; they must have had a good reason for that so let’s have one too, only let’s not make it quite as terrible.” Which I would call cargo-cult architecture design.

      (Aren’t you glad lobste.rs didn’t force me to post this as two comments?)

      1. 6

        It’s a per-instance setting. Default in Mastodon is 500 characters (and the lead developer refuses to make it a parameter), the instance I’m on does 5000 characters, other fediverse software has other limits.

      2. 3

        Counterpoint: what’s wrong with having a permalink attached to a logical subset of a longer document, i.e. a couple of paragraphs? Way back there was a brief fad for “purple links”, small hypertext anchors pointing to each specific paragraph in a blog post. This made it easier for commentators etc. to specifically target a paragraph if they wished to engage with the argument or praise a turn of phrase.

        Same thing with Twitter/fedi threads. You can comment on a specific part, or a specific part can get called out in a “viral” manner.

        1. 1

          Counterpoint: what’s wrong with having a permalink attached to a logical subset of a longer document, i.e. a couple of paragraphs?

          The webpage it’s on usually has massive gaps between each paragraph, and messed up scrolling (scrolls per ‘logical subset’) that jumps me to the bottom or back to the top. A normal blog-type post doesn’t have this problem.

          Abstractly, this problem isn’t fundamental. Practically, who cares?

    7. 9

      I’ll note also Simon Phipps on OpenOffice.org’s use of a CLA: https://lwn.net/Articles/443989/

      While pragmatically there may be isolated circumstances under which CLAs are the lesser of evils, their role in OO.o has contributed more to its demise than offered it hope.

    8. 3

      Having had to set up outgoing email for work, I concur.

      (Incoming email is corporate GMail, which I wholeheartedly recommend.)

      Incredibly easy part: setting up Postfix on Ubuntu, sending SMTP over SSL. Absolutely the easiest thing ever.

      Near-impossible part: getting Google, Microsoft and Yahoo! to accept the emails, even with full SPF/DMARC/DKIM in place. This involved multiple supplicant emails sent to unresponsive inboxes at the providers in question.

      Short answer: spammers mean we can’t have nice things.

      (No, not sending email to the largest webmail providers was absolutely not a feasible option, and I’m appalled at how many geeks thought this was a sensible suggestion to make.)

      Email is three huge webmail companies and a few stragglers now. And I still get 200+ spam a day in my personal GMail.

    9. 4

      As someone who is building a new tool to generate ePubs right now, I feel your pain. At the same time, I’m happy that I’m not alone doing these kinds of spec juggling.

      1. 3

        I’m disconcerted that I’ve had nobody come along and say “you’re doing it wrong, use this tool to turn an ODT or DOCX into something that passes epubcheck first time, you foolish person.” Like, not even LO’s internal ePub export passes epubcheck. I thought this would be sufficient nerdbait …

        1. 4

          This article is pretty old now but it suggests the author is using just Pandoc to create the EPUB. It’s not clear how much testing and validation they did on different ereader devices and programs, though.

          1. 1

            I went and tried pandoc (ver 2.5 from Ubuntu 20.04) and it did a mostly okay job from DOCX. It still messed up cross-reference endnotes, but everything does. And the output doesn’t pass epubcheck. Tralala!

        2. 4

          In my book, the more tools the better! It has been 15 minutes since a book built by my own tool passed epubcheck for the first time without any error or warning. I still need better markup for the cover and some way to generate a proper table of contents.

          I’m wishing you all the luck with your books!

    10. 6

      this one is weird:

      Calibre adds an <ol><li></ol> to every heading and subheading. Every ePub reader seems to handle this fine — except FBReader, my favoured ebook reader on Android, which displays a “1.” before each header.

      Solution: after you’ve unzipped the files, go through and remove every <ol></ol>, convert the <li></li> to <p></p> and remove the value= attribute from the <p> or else epubcheck complains.

      i considered poking at the fbreader source and seeing if i could fix it, but thinking more about it, it seems like fbreader is technically doing the right thing? i can’t figure out what the tags are for in the first place.
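
      for reference, the quoted workaround is easy to script. a rough sketch in python – regex-based, with a hypothetical unzipped-epub/ directory; a real parser would be safer, but calibre’s output is regular enough to show the idea:

          import re
          from pathlib import Path

          for page in Path("unzipped-epub").rglob("*.xhtml"):  # hypothetical path
              text = page.read_text(encoding="utf-8")
              text = re.sub(r"</?ol[^>]*>", "", text)   # drop the <ol> wrappers
              text = re.sub(r"<li[^>]*>", "<p>", text)  # <li value="..."> -> <p>
              text = text.replace("</li>", "</p>")
              page.write_text(text, encoding="utf-8")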

      1. 2

        Oh, arguably! But it’s also pretty much never what I actually want, and other ebook readers don’t do that?

        I didn’t even mention other stuff, like how I’m using zip -f because that way the “mimetype” file stays both uncompressed and first in the zip file - if you just get your files and zip them up, epubcheck isn’t happy with that either. ePub is weird and annoying.

        (edit: just adding that as a note!)
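
        If you’d rather rebuild the container from scratch than freshen it, the ordering trick looks roughly like this in Python – a sketch, with a hypothetical book/ source directory:

            import zipfile
            from pathlib import Path

            src = Path("book")  # hypothetical directory holding the ePub contents

            with zipfile.ZipFile("book.epub", "w") as zf:
                # "mimetype" must be the first entry and stored uncompressed
                zf.writestr("mimetype", "application/epub+zip",
                            compress_type=zipfile.ZIP_STORED)
                for f in sorted(src.rglob("*")):
                    if f.is_file() and f.name != "mimetype":
                        zf.write(f, f.relative_to(src),
                                 compress_type=zipfile.ZIP_DEFLATED)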

        1. 3

          yeah, i admit i don’t know much about epub generation; that line item just caught my interest because fbreader is my preferred epub reader too.

          edit: ugh, just saw it was no longer open source. so much for that, then!

            1. 2

              thanks, i’ll check it out!

            2. 2

              will check :-) Annoyingly, FBReader is still popular enough I should probably allow for it.

    11. 2

      This is awesome. Thank you!

    12. 9

      It is depressing to see how far smart contracts have fallen. The original smart-contract concept would have worked fine without blockchains. Indeed, there is something of a bifurcation in the object-capability world, with some folks (Agoric) being very convinced that blockchains and smart contracts are not just compatible but meant for each other, and the rest of us being convinced that they are obviously and horribly wrong.

      In capability theory, we draw a strong distinction between unguessable references vs. unforgeable references. An unguessable reference is a bitvector, and since it is plain data, it is only hard to know because it is hard to guess. However, an unforgeable reference is constructed by the surrounding environment and cannot be reduced to mere bits. Capability theory primarily relies on unforgeability for practical security; unguessability is an acceptable risk at the edges of the system which lets us weaken our proofs to be as strong as our cryptography and privacy permit.

      And from this perspective, hopefully it’s obvious what’s wrong: all data on blockchains is, at best, unguessable, and quite a bit of it is public. We can’t extend unforgeability across networks. This means that a blockchain can’t simply host an object-capability language in a way which is compatible with invoking unforgeable privately-held capabilities.
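
      To make the distinction concrete, here’s a toy sketch (Python, with invented names): an unguessable capability is just bits that can be copied, logged, or guessed, while an unforgeable one is a live reference the environment minted for you.

          import secrets

          # Unguessable: plain data. Whoever observes or guesses the bitvector
          # holds the capability; security is only as strong as the entropy.
          api_token = secrets.token_hex(32)

          # Unforgeable: a reference constructed by the environment. No bit
          # pattern a caller can fabricate aliases it - you either were handed
          # it or you weren't.
          def make_counter():
              count = 0
              def increment() -> int:
                  nonlocal count
                  count += 1
                  return count
              return increment  # possession of this closure IS the authority

          bump = make_counter()
          bump()  # works only because the reference was granted to us

      The closure version is, of course, process-local, and that is the point: unforgeability is a property of the environment, not of the data.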

      I don’t think that anybody involved is acting in bad faith, but they certainly do seem blind to reason because of the potential for obtaining lots of money.

      1. 3

        You are much more generous than me, then! At first one could apply “never attribute to malice what can be explained by stupidity” to some of these things, but in this decade that’s really hard to continue doing.

        1. 6

          this is crypto, it’s always both.

          Szabo’s concept of smart contracts was way more nuanced than the reality we have now, where the number one use case of his Big Idea turned out to be unregistered penny stock scams (which he’s less than thrilled about). Szabo is a computer scientist, and fully understands the engineering problems that smart contracts entail. Unfortunately, his fans didn’t listen.

    13. 4

      Haven’t heard anything about the buttcoin circus in ages, I honestly kinda forgot about it. It’s kinda surreal that lots of people work with cryptocurrency as their day job, while I forget that it even exists.

      ERC-777 is an updated version of the ERC-20 standard

      LOL, did they just put a… casino-sounding number instead of a sequential one on a new spec?

      It’s an incorrect function name (optionIDs instead of optionsIDs). This function unlocks liquidity in expired contracts. If it doesn’t work, funds are just forever locked

      Wait, what? Does the compiler just proceed full steam ahead if you call a nonexistent function?? Is this PHP?

      Nah, that would actually be nuts; their official tweet is a bit wrong. Actually they used options.length instead of optionIDs.length while they had options defined in outer scope. Well, still, I’m not the only one who thought of PHP :D
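
      Roughly this, rendered in Python (the real thing is Solidity; the names options/optionIDs are theirs, the rest is my guess at the shape of it):

          options = []  # similarly named outer-scope state; happens to be empty

          def release(option_id):
              print('releasing', option_id)

          def unlock_expired(option_ids):
              # BUG: the bound should be len(option_ids); with len(options) == 0
              # the body never runs, so nothing is ever unlocked
              for i in range(len(options)):
                  release(option_ids[i])

          unlock_expired([1, 2, 3])  # prints nothing - the funds stay locked

      (Iterating directly - for oid in option_ids: - would leave no second collection name to get wrong.)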

      Why are people iterating over lists with indexes?! Is there no for..of / iterators type thing in Solidity?!?!

      Also now that I think about it, this reminds me even more of C++, where this-> is optional, so you can easily mistype a class member name instead of a similar sounding local variable of the same type.

      1. 8

        Putting millions of (pseudo?) dollars under the control of immutable code written in what is more or less JavaScript is utterly insane. There seems to be a large supply of people to whom that is not immediately obvious.

        1. 3

          Everyone who’s on the JS-hate-bandwagon latched on to this JS comparison, but it’s completely unfair.

          “Solidity was influenced by C++, Python and JavaScript”. All three have proper iteration: for (auto x : xs), for x in xs, for (let x of xs). The Solidity examples have stuff like for (uint p = 0; p < proposals.length; p++), which makes it most inspired by… well known super safe language C, haha.

          1. 8

            I’m told (by early Ethereum guys I can’t name) that they actually wanted to use just JavaScript with added built-ins, but couldn’t make it work. So Solidity is definitely a JavaScript descendant, and more from JS than anything else. And they were absolutely targeting middling JavaScript coders with Solidity.

            Ethereum and Solidity are IMO excellent examples of “worse is better” - if you’re going to do something as self-hobbling as smart contracts, then you’re going to want provable languages, preferably not Turing-complete, preferably functional.

            But middling programmers don’t cope well with that stuff. So, give ’em something easy to use, and add free money as an incentive! And it totally worked … if inelegantly.

            There are better languages for the Ethereum Virtual Machine - but Solidity is still the overwhelming favourite, because its popularity means you can apply standard state of the art software engineering principles to it: that is, cut’n’paste’n’bodge.

            1. 4

              As I occasionally say, you can tell the difference between a salesperson and an engineer this way: the salesperson solves the easiest problem first; the engineer solves the hardest problem first.

          2. 4

            Substitute C++ or Python in my comment and it still stands. I actually can’t think of any language I’d be comfortable doing this in, but it would at least have a theorem prover.

            1. 4

              Funnily enough, Solidity can use Why3 for verification. In 2017 the translation was “not even tested”; no idea what progress has been made since.

              Either way, the human problem is still the bigger one. You can’t force everyone to verify. Especially if currently some don’t even test.

              1. 13

                Some serious PL people looked at this problem years ago, but the Solidity language has no defined semantics and is very unprincipled as a language. Basically all these formal verification tools are 5%-finished work by some grad student who got tasked to “do formal verification” purely as a means to pump the value of some token, and who abandoned it when they realised the problem was intractable.

                From a larger perspective, effectively 100% of these contracts and DeFi companies are either just gambling products or thinly veiled exit scams, so whether the code works or not isn’t really important so long as the right stakeholders get their winnings out before it blows up. David’s conclusion in the article is spot on.

      2. 1

        Actually they used options.length instead of optionIDs.length while they had options defined in outer scope.

        ah, thank you! I didn’t dive into the github to see if they’d described their “own” code correctly. Unsurprising, given they apparently just copied a pile of it. Correcting …

    14. 7

      Turing complete configuration languages shepherd you into writing complex logic with them, even though they are usually not particularly good programming environments.

      I think this one works more like: if doing the hard bit at that layer of the system would solve your problem, Turing completeness is fatally tempting.

      (that time I wrote several hundred lines of code in ant. kill me.)

    15. 2

      Working on a book about Facebook Libra - a really big dumb and bad idea in cryptocurrency. Resisting the urge to compile LibreOffice git HEAD yet again, one of my favourite displacement activities.

      Might stop down the local CEX and see about an 11” laptop - it’s time to go back to a netbook. A writing device that’s mine. And cheap enough that if it breaks, I can just get another one.

      I have a very powerful Lenovo X390 for work - 4-core/8-thread i7, 32GB RAM, compiles LO in 90 minutes! … and a fragile screen. I’m hell on laptops anyway, but I already broke the screen on this one previously, and I now regard this £1000 corporate beast machine as a fragile toy I don’t want to risk using.

      Probably one of the underpowered Windows 10 Chromebook competitors - I traveled a couple of weeks ago with the loved one’s Chromebook, and while I wholeheartedly recommend Chromebooks as travel laptops precisely because it’s not a disaster if it’s lost/stolen/broken, it turns out I really like having PgUp/PgDown keys.

      Though I’d accept a sufficiently ’l33t Chromebook. And then put GalliumOS on it.

      I thought I wanted a Surface Go - I am deeply impressed by a 10” tablet that is genuinely a proper PC, and apparently they Linux surprisingly well - but the form factor isn’t quite right for what I want, which is sitting on the couch and typing a lot. I want a laptop, not a tablet that sorta does being a laptop.

      And if I want literally today’s build of LibreOffice, I can build it on the work laptop and copy over the instdir :-D

      1. 1

        I ended up buying the Hypa Flux. I had an authentic early-2000s Linux experience with the crappy Realtek wifi, but I’m otherwise delighted with it. Review: https://reddragdiva.dreamwidth.org/608064.html

    16. 8

      This won’t come as a surprise to anyone working in enterprise software. There is an absolutely enormous cottage industry of consultancies selling these “blockchain” solutions to large companies when all they needed was a simple database. It’s metastasised so much that we’re seeing second-order solutions trying to solve the privacy problems induced by the first-order solutions. This is really the last-ditch effort of IBM trying to maintain some semblance of relevance in the modern software era: creating completely unmaintainable back-office software, a loss leader for their Bluemix Cloud solution that they can parasite off of for consulting over the next decade.

      1. 1

        I was actually surprised not to see IBM’s name on this one.

        yeah, I fully expect there will be systems that sort of work better than not existing at all, with a gratuitous Hyperledger or private Ethereum somewhere in there that doesn’t actually do anything, and people like me will maintain these awful things for decades longer than is in any way reasonable. I think the ASX Blockchain project will end up one of these, for example.

        1. 3

          Microsoft and Ernst & Young are just playing the same game on top of this bizarre Ethereum thing they built to sell Azure products. It’s the same cargo-cultism masquerading as innovation to produce press releases. It’s all just pumped up by the public blockchain press because “any blockchain success makes number go up” and so the cycle of suffering continues for the poor programmers left to maintain this mess.

    17. 43

      I hope this makes more businesses realize that people can work from home 90% of the time for a great many positions. The amount of time saved, gas saved, and stress saved is immense… not to mention the amount saved on office space and associated costs.

      1. 6

        I’ve been working from home for over a week and I’ve been much happier.

        I just need to go for a walk around my neighborhood each day to at least leave the house. I never go for a walk when I go to the office. It’s nice - I went around and took some photos on my Nikon FE2 today (been getting back into film recently).

        1. 7

          I got a dog to force me to get out every day, and it’s rewarding in many ways.

      2. 3

        I also hope this could be the case, but I think there’s also a possibility that it could have the opposite effect, owing to:

        1. Rushing into it without time to prepare and test remote-working infrastructure.
        2. Being forced to suddenly go all in, rather than easing themselves into it gradually by initially having some people working from home some of the time.

        If a company experiences problems because of it, they might be more likely to dismiss the possibility in future.

        1. 3

          Bram Cohen has a good Twitter thread about this - https://twitter.com/bramcohen/status/1235291382299926529 - “My office went full remote starting the beginning of this week related to covid-19 … This isn’t out of fear that going in to work is dangerous. It isn’t, at least not yet. It’s out of concern for not spreading disease and erring on the side of going full remote sooner rather than later.” Making sure you can strikes me as a good idea.

      3. 7

        If only I could work at McDonald’s from home. Sure would be nice if I could just receive a case of patties in the mail, cook them up, and mail them out. They have enough preservatives that it wouldn’t be an issue, right?

        1. 10

          There’s something that resonates about this. I wonder if these companies also encourage their data center engineers to work from home. Or even their cleaning and cafeteria staff. ‘Working from home’ requires an economic infrastructure that we expect to keep working, even though it requires people not to ‘work from home’.

          I’m absolutely sympathetic to the argument that not packing people together in tight spaces might, if we’re lucky, limit the spread of the virus. Maybe this is the wrong moment to wonder about the classist aspects of this.

          1. 31

            I think the idea of restricting workplace interaction gets a bit muddled in transmission.

            A pandemic of this kind is almost impossible to stop absent draconian quarantine practices.

            The point of getting (some) people not to take public transport, go to restaurants, hang out around the water cooler etc. is not to ensure that those people don’t get sick. A certain percentage of them will get sick, no matter what. The point is to slow the transmission, to flatten the curve of new illnesses, so that the existing care infrastructure can handle the inevitable illness cases without being overwhelmed.

          2. 6

            I’m not sure what the class angle is here. There are white-collar jobs that can’t be remote, like doctors. And there are some blue-collar ones that can, like customer support by phone.

            1. 11

              There are always exceptions. But in general, “knowledge work” is both paid higher, and also allows the employee greater flexibility in choosing their place of work.

              1. 2

                Agreed with this take, yes.

            2. 8

              Many middle class jobs in the United States provide very few paid sick days, let alone jobs held by the working class. Paid sick leave is a rarity for part time jobs.

              People who hold multiple part time jobs to survive will face the choice of going to work while sick or self-isolating and losing their income.

              There’s absolutely a class component to consider, especially in America where social safety nets are especially weak.

            3. 1

              While it’s true that not all doctors can work remotely and others can’t all the time, telemedicine is a significant and growing part of the medical profession. Turns out there’s a lot of medicine that does not require in-person presence.

        2. 0

          What do you think this comment possibly adds to the conversation except being snide?

    18. 5

      We are currently doing a mandatory WFH test run where our office is closed today and everyone is working from home. So far it’s been interesting, lots of zoom meetings (context: I work for Blend, our SF office is around 300 people).

      We are also allowing people to electively WFH for the foreseeable future.

      1. 10

        Go you! So many places are all “I guess we’ll deal with it when it happens”, but there’s nothing like a test run.

      2. 2

        Similar. I normally work from home anyway, but I’m part of a big office in London. We’ve been rotating departments over the last week, each department testing its team working from home. We’re in public health, so we’re trigger-happy about mitigating anything that could mess with business continuity.

      3. 2

        I’m in ops, and our WFH game is 100% cos we have to be able to do our entire job from a laptop as needed when on call. Our devs are not on call, but they are similarly well set-up.

        The rest of the company is being dragged into the present quite nicely! We have a shiny new OpenVPN setup that’s doing the job of connection.

        The sticking point is phone meetings - suitable software (RingCentral sorta sucks; Google Hangouts really sucks; Slack’s audio is way better, but only tech is on Slack), and having the people in the room give a damn about making sure the microphone can hear them.

        1. 3

          Zoom any good? Their name seems to crop up in this context a lot.

          1. 1

            Zoom has been excellent the few times I’ve used it this year – far above all other solutions I’ve experienced besides Blue Jeans (I’d say they’re tied for top of the heap).

          2. 1

            RingCentral is actually a fork of Zoom, I think. I’ve only used Zoom myself for podcast recording, and it was fine?

    19. 52

      nine-to-five-with-kids types

      God forbid people have children and actually want to spend time with them.

      As someone with kids, I’m reasonably sure I’m pretty good at what I do.

      1. 40

        The second statements includes a retraction of sorts:

        P.S. I have no problem with family people, and want to retract the offhand comment I made about them. I work with many awesome colleagues who happen to have children at home. What I really meant to say is that I don’t like people who see what we do as more of a job than a passion, and it feels like we have a lot of these people these days. Maybe everyone does, though, or maybe I’m just completely wrong.

        I disagree with them that it’s wrong to see your work as a job rather than a passion; in a system where it’s expected for everyone to have a job, it’s bullshit (though very beneficial for the capitalist class) to expect everyone to be passionate about what they’re forced to do every day. Nonetheless, the retraction in the second statement is important context.

        1. 14

          Owning stock was supposed to fix this, rewarding employees for better work towards company growth. However, the employees are somehow not oblivious to the fact that their work does not affect the stock price whatsoever; it depends mostly on the news cycle around the company and its C-level execs (and on large defense contracts, in the case of Microsoft).

        2. 8

          Your appeal to our capitalist overlords here is incorrect. It’s a symptom of capitalism that you have people at work who are just punching in and don’t have a passion for it. If we weren’t bound by a capitalist system, people would work on whatever they’re passionate about instead of worrying about bringing home the bacon to buy more bacon with. Wanting people to be passionate about their work is a decidedly anti-capitalist sentiment, and wanting people to clock in, do their job, clock out and go home to maintain their nuclear family is a pretty capitalist idea.

          1. 2

            I agree with you.

            in a system where it’s expected for everyone to have a job, it’s bullshit (though very beneficial for the capitalist class) to expect everyone to be passionate about what they’re forced to do everyday.

            That’s true I think, undoubtedly. You can “fix” it by accepting people who aren’t passionate, or by replacing the system where people need a job to survive. I’d definitely prefer to fix it by replacing the system, but in any case, blaming 9-to-5-ers is wrong - they’re victims more than anything, forced to work a job they’re not passionate about instead of going out on whatever unprofitable venture captures their heart.

            1. 1

              or by replacing the system where people need a job to survive

              How do you propose we do that without literally enslaving people? And without causing necessary but often necessarily low-passion jobs (like sorting freight in a courier company at 1AM - one of my jobs as a teenager) to simply not get done?

              I mean, what you’re really proposing is that some people get to go on ‘whatever unprofitable venture captures their heart’ while … what? Someone else labours to pay their way?

              I personally think that it’s entirely reasonable to accept people whose passion isn’t their job, provided they’re professional and productive. There’s nothing here to fix, really, beyond rejecting the (IMO) unrealistic idea that a good employee has to be passionate about their work.

          2. 2

            Every time we’ve tried an alternative at scale it’s led to mass murder, or mass starvation, or both.

            You’re also ignoring the fact that some jobs are just plain unpleasant. I’ve worked some in my youth. It’s reasonable not to be passionate about those; they’re either high-paying because they’re unpleasant but skilled, or the lowest-paying jobs, because they’re work you do when your labour isn’t valuable in any other way.

      2. 9

        As another kind of beef against that line, in my experience, people working crazy long hours on a consistent basis generally aren’t doing it out of some kind of pure passion for the project, implementing a cool amazing design, etc. They’re usually pounding out ordinary boring features and business requirements, and are working long hours because of poor project management - unrealistic promises to customers, insufficient resources, having no idea what the requirements actually are but promising a delivery date anyways, etc. IMO, anyone with any wisdom should get out of that situation, whether or not they have kids.

        Also IME, those who do have genuine passion to build some cool new thing often don’t work long hours at all, or only do so for a short time period.

        1. 7

          Someone I know could easily be working for Google or such, but he works as a developer for a smaller, non-tech, local company, so he can spend his free time building the projects he likes (or enjoy the luxury of not building anything at all), instead of sinking it into some absurd-scale megacorp thing. (Or, put another way: build the great thing you want to build, and don’t be subject to what, say, Facebook wants.) IMHO, it’s pretty smart…

      3. 7

        The anonymous author apologizes about this specifically in their follow-up.

        1. 4

          The apologies sound like something between a sincere apology for just blindly ranting at it all and an apology for apologies’ sake. It looks to me like the author actually feels (felt) closer to the way he described it in the first message than in the second. Not all the way, but somewhat.

          1. 1

            The apology is also written in an entirely different style. It’s not clear there’s strong reason to assume it’s the same person.

            edit: reading through the comments, the blog’s owner says: “I won’t reveal how the anonymous poster contacted me, but I am 99.9% sure that unless his communication channel was hacked, he is the same person.” So, ok, it’s the same person.

      4. 11

        I know which people the author means, though - not all of them have kids; they’re just the ones who punch a clock from 9 to 5, want to remain comfortable, and don’t want to push themselves to make great things. It’s not that these people want to make bad things, or don’t want to make great things, or aren’t capable of doing great work; they just don’t receive any self-fulfillment from work. I don’t blame these people for acting this way, I’d just rather not work with them.

        The best dev on my team (by a long shot) is a 9-5 worker who never finished college, but accomplishes more in one day than most devs accomplish in a week or more. He aspires to do incredible work during his time, and he excels every day.

        1. 9

          Once organizations get to a certain threshold of complexity, though, you need to think much more about the incentives on offer than about the micro-level of which people want to do excellent things. You have to make it easier to do the excellent thing than not, and that can be difficult - or, in an org the size of Microsoft’s Windows kernel groups, basically impossible.

          1. 5

            My comment was directed at the author’s “9-5 with kids” line, since the author is referring to the contributions of those people. Organizational structure produces different effects on products depending on the personalities and abilities of the people within the organization.

            In general, the bigger the organization, the lower the risk tolerance - but most actions incur some sort of risk. Also, staying with the crowd and doing what you’re told is safest, so inaction or repeating previous actions often becomes the default.

            1. 2

              Good point. It never makes sense to innovate and take risks in a large organization. If your gamble pays off, the payoff will invisibly pump up some vaguely connected numbers that will look good for some manager higher up. If it doesn’t pay off, it’s your fault. So, regardless of how good a gamble it was on a risk-reward basis, it’s never favorable for you personally. Don’t expect your immediate superior to appreciate it either, even if the gamble pays off, because that person faces the same odds.

        2. 7

          As an anecdote, the workplace that eventually proved the worst I’ve been at had a deeply ingrained culture of being passionate about work, and of being part of “a family” through the employer. It’s not an entirely healthy stance.

        3. 4

          I don’t blame these people for acting this way, I’d just rather not work with them.

          Why not?

          You said they want to make good things, great things, and can do great work. Why does someone else’s opinion of their job matter to you?

          1. 5

            It’s when people’s opinions affect the work they do. My point is they can do great work, but many don’t. Sometimes it manifests as symptom-fixing rather than problem-solving, because it’s “whatever gets me out today by X, screw the consequences later.” Other times it manifests as “I wasn’t told to do this, so I won’t”. When I’ve seen it, it’s often pervasive short-term thinking over long-term thinking.

    20. 2

      There’s a third tech critique of Libra: https://twitter.com/mcclure111/status/1142496050177105923

      It’s currently an epic Twitter thread; the author is cleaning it up to make it a blog post at some point.