What’s really funny to me is that despite how people go “oh yeah, Apple’s going to suffer from this new EU legislation”, they’re already mostly if not fully compliant - usually more so than anyone else. Software updates are the big one, as is parts availability. The battery rule has specific carveouts that Apple devices, at first glance, already seem compliant with.
If anything, this addresses a market failure. Apple hardware generally has way higher resale value than other manufacturers’. Partly it’s because of cachet, but partly it’s because you can count on Apple to support the device for a long time. When our kid needed a new phone we got a refurbished iPhone 8 (same one I am using), and we could do that with the expectation that it was going to be supported for at least a year or more.
Anecdotally I seem to remember buyers of high-end Google Pixel phones complaining that they don’t get any software updates after a certain time. I might be misremembering it though.
It’s easy for Apple because they get revenue from app sales and from iCloud and other attached services. Once you buy an iThing, Apple is best placed to sell you more things. Even if they sold the hardware at cost, they’d make money (just less). In contrast, Google makes money on Android phones after the initial sale but the manufacturer pays the cost of testing security fixes. Google has pulled a few high-risk things like the web view into the Google Play update mechanism, but they still rely on manufacturers a lot, and there’s no incentive for manufacturers to do something that makes them no money and makes it less likely that someone will buy a new phone.
It’s easy for Apple because they get revenue from app sales and from iCloud and other attached services. Once you buy an iThing, Apple is best placed to sell you more things.
That’s exactly why I’m a bit bothered when they are put on trial over their App Store dominance… You don’t buy hardware, you buy a device that’s part of an ecosystem.
Looks like the beginning of the end for the unnecessary e-waste provoked by companies forcing obsolescence and anti-consumer patterns made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m an EU citizen, and I see this argument made every single time the EU passes new legislation affecting tech. So far, those worries have never materialized.
I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell their products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as making the stem detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken, it is about a battery that is replaceable at service time, not “drop-it-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old days.
In the specific case of batteries, yep, you’re right. The legislation actually carves out a special exception for batteries that’s even more manufacturer-friendly than the other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and have a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply for the CE marking don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant; if anyone isn’t, they really got tricked by whoever’s selling them the batteries.
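(Just to illustrate how mechanical that criterion is, here’s a minimal sketch in Python of the kind of check involved. The threshold values are placeholders I made up for illustration, not the figures from the regulation; the point is only that the inputs are exactly the datasheet numbers suppliers already publish.)

    # Sketch of a battery endurance check of the kind described above.
    # The thresholds are made-up placeholders, NOT the regulation's figures.
    PLACEHOLDER_THRESHOLDS = {
        500: 0.83,    # hypothetical: >= 83% of rated capacity after 500 cycles
        1000: 0.80,   # hypothetical: >= 80% of rated capacity after 1,000 cycles
    }

    def meets_endurance_criteria(measured):
        """measured maps cycle count -> remaining capacity as a fraction of
        rated capacity, i.e. the numbers a supplier datasheet already lists."""
        return all(measured.get(cycles, 0.0) >= minimum
                   for cycles, minimum in PLACEHOLDER_THRESHOLDS.items())

    print(meets_endurance_criteria({500: 0.88, 1000: 0.84}))  # True
    print(meets_endurance_criteria({500: 0.85, 1000: 0.71}))  # False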
The only additional requirement set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic), so in practice that just means that if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Do you think Apple engineers are not capable of designing AirPods that have a removable battery?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specified that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to refuse consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the costs to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party.
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income who all speak the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
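(The decimal-separator point is a real, if small, engineering cost. As a minimal illustration, here’s a sketch using Python’s Babel library, which is my choice of tool for the example rather than something mentioned above, showing how the same number has to be rendered differently per locale.)

    # Locale-dependent number formatting, using Babel (pip install Babel).
    # The locales are arbitrary examples.
    from babel.numbers import format_decimal, format_currency

    price = 1234.56
    for loc in ("en_US", "fr_FR", "de_DE"):
        # Same value, different grouping and decimal separators per locale,
        # roughly "1,234.56" (en_US) vs "1 234,56" (fr_FR) vs "1.234,56" (de_DE).
        print(loc, format_decimal(price, locale=loc),
              format_currency(price, "EUR", locale=loc))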
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent big from the taxpayer’s purse and not spawned a Silicon Valley of their own.
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
Silicon Valley was born through the intersection of several contributing factors, including a skilled science research base housed in area universities, plentiful venture capital, permissive government regulation, and steady U.S. Department of Defense spending.
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much of a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third-party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community and which many of their chats were designed after, they want to create a new standard together, & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol with a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reasons the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you or gives you RF burns or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious, however: do you see the current situation as tenable? You mention above that there are anti-consumer practices and the like, but you also express concern that regulation will quickly slide down a slippery slope. Do you think the current system, where there is more and more lock-in both on the web and in devices, can be pried back from those parties?
The alternative is not regulating, and it’s delivered absolutely stunning results so far.
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again and the associated anti-consumer decisions made to the benefit of the bigger players. This situation is well known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations won’t attempt to maximize their profits when they have the opportunity.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
As a customer, I react to this by never voluntarily buying Apple products.
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product.
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap components (pairing) that are supported for only one year, what are people to do? If people have no idea how secure someone’s authentication is on GitHub, how can they make an informed decision about security?
But by the same token, if people don’t care to research the repair costs of their devices before buying them
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the costs to new entrants to the field.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Ensuring “additional” compliance is often a one-time cost. As an EE, you’re supposed to know these things and keep up with them, you don’t come up with a schematic like they taught you in school twenty years ago and hand it over to a compliance consultant to make it deployable today. If there’s a major regulatory change you maybe have to hire a consultant once. More often than not you already have one or more compliance consultants on your payroll, who know their way around these regulations long before they’re ratified (there’s a long adoption process), so it doesn’t really involve huge costs. The additional compliance testing required in this bill is pretty slim and much of it is on the mechanical side. That is definitely not one-time but trivially self-certifiable, and much of the testing time will likely be cut by having some of it done on the supplier end (for displays, case materials etc.) – where this kind of testing is already done, on a much wider scale and with a lot more parameters, so most partners will likely cover it cost-free 12 months from now (and in the next couple of weeks if you hurry), and in the meantime, they’ll do it for a nominal “not in the statement of work” fee that, unless you’re just rebranding OEM products, is already present on a dozen other requirements, too.
An embarrassing proportion of my job consisted not of finding creative ways to fit a removable battery, but in finding creative ways to keep a fixed battery in place while still ensuring adequate cooling and the like, and then in finding even more creative ways to design (and figure out the technological flow, help write the servicing manual, and help estimate logistics for) a device that had to be both testable and impossible to take apart. Designing and manufacturing unrepairable, logistically-restricted devices is very expensive, too, it’s just easier for companies to hide its costs because the general public doesn’t really understand how electronics are manufactured and what you have to do to get them to a shop near them.
The intrinsic difficulty of coming up with a good design isn’t a major barrier to entry for new players any more than it is for anyone else. Rather, most of them can’t materialise radically better designs because they don’t have access to good suppliers and good manufacturing facilities – they lack contacts, and established suppliers and manufacturers are squirrely about working with them because they aren’t going to waste time on companies that are here today and gone tomorrow. When I worked on regulated designs (e.g. medical) that had long-term support demands, that actually oiled some squeaky doors on the supply side, as third-party suppliers are equally happy selling parts to manufacturers or authorised servicing partners.
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-run marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen that has the same priorities I do not be able to buy the device they want?
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 was supported for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate for their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ’90s; schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers, but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside; they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because the vendor has locked down the bootloader (and/or not documented any of the device interfaces), then consumers have no choice but to upgrade.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1], and so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime (which research has shown has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of markets is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now?
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The regulations eventually banned goods below a certain efficiency rating, but that was largely unnecessary because the market adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV
I can see how such usurpation could distort my view.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist.
Well… yeah.
Precisely what [markets] will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here.
I love this example. Plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion, they do so because they just can’t measure the criterion even if they cared. Even a Libertarian should admit that making good purchase decisions requires being well informed.
You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do).
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public instances (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed results in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) were a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
My favourite example is the Internet: the fibre should be installed by public instances (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
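(To make the pricing tension concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is hypothetical; subscriber count, costs, and upgrade horizon are all invented, and the point is only the structure of the problem.)

    # Back-of-the-envelope sketch of the flat-rate pricing problem above.
    # All figures are hypothetical.
    subscribers = 20_000                 # hypothetical take-up in the covered area
    maintenance_per_year = 1_200_000.0   # hypothetical upkeep of the existing plant
    upgrade_cost = 15_000_000.0          # hypothetical cost of a future 10 Gb/s refresh
    years_until_upgrade = 10             # hypothetical amortisation horizon

    reserve_per_year = upgrade_cost / years_until_upgrade
    rate = (maintenance_per_year + reserve_per_year) / subscribers / 12
    print(f"required flat rate: ~{rate:.2f} per subscriber per month")

    # If the upgrade arrives earlier, costs more than planned, or subscribers
    # churn, the flat rate was set too low; that is exactly the difficulty
    # described above.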
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the more reluctant private enterprises will be to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
The problem with public ownership is that it’s hard to incentivise efficiency improvements.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject; I’m not informed enough to have an opinion on where the public/private frontier is best placed.
The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the more reluctant private enterprises will be to invest in it, and if they do, the more they will want to extract rent from their investment
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so, because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away taxpayers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win); the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
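(A minimal sketch of those two payment rules, with made-up bid figures, for a reverse auction where the lowest bid is best:)

    # Sketch of the two procurement variations described above.
    # Bids are hypothetical: the price each firm asks to deliver the contract.
    bids = {"A": 9.2, "B": 7.5, "C": 8.1}

    ranked = sorted(bids, key=bids.get)   # cheapest first

    # Variant 1: award to the second-lowest bidder, which discourages
    # lowball bids, since undercutting everyone means you don't win.
    second_lowest = ranked[1]
    print("variant 1:", second_lowest, "wins and is paid", bids[second_lowest])

    # Variant 2: award to the lowest bidder, but pay them the next-lowest bid,
    # i.e. what someone else thought the job could be done for.
    winner, runner_up = ranked[0], ranked[1]
    print("variant 2:", winner, "wins and is paid", bids[runner_up])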
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them)
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
Were they actually successful, or did they only decrease operating energy use?
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
You can’t get a better rating by making a device that lasts half as long.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
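(For what it’s worth, here’s a minimal sketch of that per-cycle figure. The band boundaries at the end are placeholders I invented, not the real EU thresholds; the only grounded detail is the kWh-per-cycle value rounded to three decimal places.)

    # Per-cycle energy figure of the kind ratings are based on: total energy
    # measured over a set of standard test cycles, as kWh/cycle rounded to
    # three decimals. The rating bands are made-up placeholders.
    def kwh_per_cycle(total_kwh_measured, cycles):
        return round(total_kwh_measured / cycles, 3)

    PLACEHOLDER_BANDS = [      # (upper bound in kWh/cycle, label): invented values
        (0.50, "A"),
        (0.65, "B"),
        (0.80, "C"),
        (float("inf"), "D"),
    ]

    def rating(per_cycle):
        return next(label for bound, label in PLACEHOLDER_BANDS if per_cycle <= bound)

    per_cycle = kwh_per_cycle(total_kwh_measured=12.4, cycles=20)
    print(per_cycle, rating(per_cycle))   # 0.62 -> "B" under these made-up bands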
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved them in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been heavy progress in smartphone hardware over the last 10 years.
Since 2015 every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation, phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game. Just as PC hardware specs were propelled by gaming, the mobile market is driven by games, and mobile is, I believe, now the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
I don’t think the games are really that CPU / GPU intensive
But a lot are intensive, & enthusiasts often prefer them. Still, those time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I recently was reunited with my OnePlus 1 (2014) running Lineage OS, & it was choppy at just about everything (this was using the apps from when I last used it (2017), in airplane mode, so not just contemporary bloat), especially loading map tiles on OSM. I tried Ubuntu Touch on it this year (2023) (listed as having great support) & it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even if not performance-bottlenecked, efficiency is certainly better (I highly doubt it’d save more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years and are more than competitive once you factor in cost. I doubt anyone is buying into either platform purely on performance numbers anyhow, versus the ecosystem and/or wanting hardware options not offered by one or the other.
Those are some cherry-picked comparisons. Apple releases on a different cadence. Check right now, and the S23 beats up on it, as do most current flagships. If you blur the timing, it’s all about the same.
With phones of the same tier released before and after, you can see the benchmarks are all close, as is battery life. Features are wildly different though, since Android can offer a range of different hardware.
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, and 12 have distinct visual styles, and between vendors this distinction widens further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream, of course) (or at least, used to) - but consumers notice even if they can’t clearly articulate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful, but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means they were given a Hobson’s choice.
but manufacturers don’t provide it because it goes against their interest.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
This cynical view is unwarranted in the case of EU, which so far is doing pretty well avoiding regulatory capture.
EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements, and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when the subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, with rent-seeking incumbents that work for their shareholders’ interests, not consumers’.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods), contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
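As a throwaway illustration of why the multi-dimensional version gets unwieldy (the axes, grades, and the MultiAxisLabel name below are entirely made up, just a sketch):

    from dataclasses import dataclass

    @dataclass
    class MultiAxisLabel:
        battery_life: str        # each axis graded "A".."G"
        gaming_performance: str
        office_work: str
        repairability: str

        def headline(self) -> str:
            # There is no honest way to collapse four grades into the single
            # letter shoppers actually compare on, which is the practical problem.
            return "/".join((self.battery_life, self.gaming_performance,
                             self.office_work, self.repairability))

    print(MultiAxisLabel("A", "D", "B", "C").headline())  # -> "A/D/B/C": accurate, but hard to shop on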
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:
The number of years that the device would get security updates.
The maximum time between a vulnerability being disclosed and the device getting the update.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels but companies are free to put as much or as little as they wanted. Her research looked at this across a few consumer good classes and used the standard methodology where users were shown a small number of devices with different specs and different things on these labels and then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers but she found that users consistently were willing to select higher priced things with better security guarantees, and favoured them over some other features.
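I don’t know her exact design, but the general stated-choice setup is easy to sketch. The snippet below is a toy random-utility simulation (the profiles, weights, and noise are my own inventions, not her data or methodology), just to show how a preference for a better security SLA falls out of this kind of experiment:

    import random
    from collections import Counter

    # Two hypothetical device profiles differing in price and security SLA.
    profiles = {
        "cheap, short support":  {"price": 299, "update_years": 2, "patch_delay_months": 6},
        "pricier, long support": {"price": 349, "update_years": 5, "patch_delay_months": 1},
    }

    def pick(options):
        # Toy utility: an extra year of updates is "worth" ~20 EUR, a month of patch
        # delay "costs" ~5 EUR, plus noise so simulated respondents don't all agree.
        def utility(o):
            return -o["price"] + 20 * o["update_years"] - 5 * o["patch_delay_months"] + random.gauss(0, 30)
        return max(options, key=lambda name: utility(options[name]))

    print(Counter(pick(profiles) for _ in range(1000)))
    # With these made-up weights the better-supported profile wins most choices,
    # which mirrors the direction of the result described above.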
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production, and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Take Apple, for example: they’re always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced the unibody MacBooks? Or the 10-minute video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and is still waterproof. Or how they came up with a new super-fancy screen glass that can survive 45 drops.
Depending on how you define “progress”, there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that will ever come to the mass-produced iPhone, but maybe a useful one for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
This overall shift will favor long-term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
What the hell does “manufacturers will have to make compatible software updates available for at least 5 years” mean? Who determines what bugs now legally need to be fixed on what schedule? What are the conditions under which this rule is considered to have been breached? What if the OS has added features that are technically possible on older devices but would chew up batteries too quickly because of missing accelerators found on newer devices? This is madness.
Took me all of 10 minutes to find the actual law instead of a summary. All of the questions you have asked have pretty satisfying answers there IMO.
From the “Operating system updates” section:
from the date of end of placement on the market to at least 5 years after that date, manufacturers, importers or authorised representatives shall, if they provide security updates, corrective updates or functionality updates to an operating system, make such updates available at no cost for all units of a product model with the same operating system;
If you fix a bug or security issue, you have to backport it to all models you’ve offered for sale in the past 5 years. If your upstream (android) releases a security fix, you must release it within 4 months (part c of that section).
Part F says if your updates slow down old models, you have to restore them to good performance within “a reasonable time” (yuck, give us a number). An opt-in feature toggle that enables new features but slows down the device is permitted, which I suspect is how your last question would be handled in practice.
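For what it’s worth, the timeline arithmetic is simple enough to sketch. The dates below are hypothetical and the snippet is only my own illustration of how the 5-year and 4-month windows compose, not anything lifted from the regulation’s text:

    from datetime import date
    from dateutil.relativedelta import relativedelta

    # Hypothetical dates, purely for illustration.
    end_of_placement = date(2025, 6, 1)   # last date units of this model are placed on the EU market
    upstream_fix = date(2027, 3, 10)      # upstream (e.g. Android) publishes a security fix

    support_until = end_of_placement + relativedelta(years=5)   # updates must stay available until here
    ship_deadline = upstream_fix + relativedelta(months=4)      # upstream fixes must ship within 4 months

    if upstream_fix <= support_until:
        print(f"Still in the support window (until {support_until}); the fix must ship by {ship_deadline}.")
    else:
        print("Outside the 5-year window.")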
too little, too late, but better than nothing.
I hope this will lead to a consolidation of hardware and a slower pace of change.
Looks like the beginning of the end of the fantastic progress in tech that’s resulted from a relative lack of regulation.
Also, probably, a massive spike in grift jobs as people are hired to ensure compliance.
Also, if I’m not mistaken it is about service-time replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves out a special exception for batteries that’s even more manufacturer-friendly than the other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and have a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact, the vast majority of devices on the EU market are already compliant; if anyone isn’t, they really got tricked by whoever’s selling them the batteries.
The only additional requirement is that fasteners have to be resupplied or be reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic), so in practice that just means that, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries, that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
That kind of cookie requires no popup though; only the ones used to share info with third parties or collect unwarranted information do.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to withhold your consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example, “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
We do know, and it’s a Li-ion button cell: https://guide-images.cdn.ifixit.com/igi/QG4Cd6cMiYVcMxiE.large
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
What is the alternative?
Those companies are clearly engaging in anti-consumer behavior, actively trying to stop right to repair and more.
The industry demonstrated to be incapable of self-regulating, so I think it’s about time to force their hand.
This law can be read in its entirety in a few minutes, it’s reasonable and to the point.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
(from https://www.folklore.org/StoryView.py?project=Macintosh&story=Diagnostic_Port.txt )
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not in either Europe or Australasia, and it’s cultural as much as it’s economic.
Did a standard electric plug also stifle innovation? Or mandates that a car has to fit in a lane?
Laws are the most important safety lines we have, otherwise companies would just optimize for profit in malicious ways.
The reason is literally buckets and buckets of money from defense spending. You should already know this.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons, but it’s clearly not sufficient. Other countries have spent heavily from the taxpayer’s purse and not spawned a Silicon Valley of their own.
“Spent up”? At anything near the level of the USA??
Yeah.
https://en.m.wikipedia.org/wiki/History_of_computing_in_the_Soviet_Union
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
https://en.m.wikipedia.org/wiki/Economic_calculation_problem
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
But they did spawn a Silicon Valley of their own:
https://en.wikipedia.org/wiki/Zelenograd
The Wikipedia article cites a number of factors:
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make slippery-slope arguments about how requiring replaceable batteries & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. Look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore - which is controlled by a third-party committee/community, and which many of their chats were originally designed after - they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping the majority of their service within the garden, or only allow other big corporate players to adopt/use their protocol via a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining that your mobile phone electrocutes you, gives you RF burns, or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious, however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but you also express concern that regulation will quickly slide down a slippery slope. So I’m curious whether you think the current system, where there is more and more lock-in both on the web and in devices, can be pried back from those parties?
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now we see monopolies appearing again, and the associated anti-consumer decisions that benefit the bigger players. This situation is well known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations not to attempt to maximize their profits when they have an opportunity.
Well, I guess I am wrong then, but I prefer slower progress, slower computers, and generating less waste than just letting companies do all they want.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
They’re not, if we give them access to the information and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap (paired) components, supported for only one year, what are people to do? If people have no idea how secure someone’s authentication is on GitHub, how can they make an informed decision about security?
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-ran marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen who has the same priorities I do not be able to buy the device they want?
Then I have some very good news for you!
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate for their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ‘90s. Schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside, they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because the vendor has locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice but to upgrade the device.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully for the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1], so once a handset has shipped, the vendor has made as much money from it as they possibly can. If a vendor makes a phone that gets updates for longer, then it will cost more. Customers don’t see that at the point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime, which research has shown has a surprisingly large impact on purchasing decisions. By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
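To put rough numbers on that incentive argument: the 5% share is the proposal above, but everything else below is an invented placeholder, so treat it as a shape-of-the-problem sketch rather than real economics.

    # All figures are invented placeholders.
    app_spend_per_user_per_year = 60.0        # EUR spent through the store per active user
    vendor_share_if_up_to_date = 0.05         # the proposed cut, paid only while the phone runs the latest Android
    active_users_of_model = 3_000_000
    annual_update_engineering_cost = 2_000_000.0  # testing and shipping updates for this model

    revenue_from_staying_current = (vendor_share_if_up_to_date
                                    * app_spend_per_user_per_year
                                    * active_users_of_model)

    print(f"Revenue for keeping the model updated: ~{revenue_from_staying_current:,.0f} EUR/year")
    print(f"Cost of doing the update work:         ~{annual_update_engineering_cost:,.0f} EUR/year")
    # With these placeholders the update work pays for itself several times over;
    # the point is that today the handset maker only ever sees the cost line.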
That is remarkably untrue. At least one entire school of economics proposes exactly that.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
“Market failure” just means “the market isn’t producing the prices I want”.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag, for instance, is the existence of such long-lived “schools”, which are a sign of dogma more than of sincere inquiry.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now, looking at the actual purpose… the second people exchange stuff for a price, there’s pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of markets is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market, and that’s something you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The regulations eventually banned goods below a certain efficiency rating, but that was largely unnecessary because the market had adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
I can see how such usurpation could distort my view.
Well… yeah.
I love this example. Plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion, they do so because they just can’t measure the criterion even if they cared. Even a Libertarian should admit that making good purchase decisions requires being well informed.
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented out at a flat rate – no discount for bigger volumes. Then you just let private operators rent the bandwidth however they please and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
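Back-of-the-envelope, with entirely hypothetical numbers, the funding question looks something like this:

    # Entirely hypothetical figures - just to show the shape of the amortisation problem.
    homes_passed = 20_000
    upgrade_cost_per_home = 300.0          # e.g. a 1 -> 10 Gb/s electronics refresh
    amortisation_years = 10
    maintenance_per_home_per_year = 15.0

    total_capex = homes_passed * upgrade_cost_per_home
    capital_per_home_per_year = upgrade_cost_per_home / amortisation_years
    flat_rate_per_home_per_month = (capital_per_home_per_year + maintenance_per_home_per_year) / 12

    print(f"Upfront investment: ~{total_capex:,.0f} EUR")
    print(f"Flat rate needed:   ~{flat_rate_per_home_per_month:.2f} EUR/home/month")
    # ~3.75 here - but the cost and timing of the *next* technology jump are unknown,
    # so today's flat rate is necessarily set against a guess.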
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less willing private enterprises are to invest in it, and if they do, the more rent they will want to extract from their investment. There’s also the fact that fibre (or copper) is naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest on. For instance, we need boxes and transmitters and whatnot to lighten up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject, I’m not informed enough to have an opinion on where the public/private frontier is best placed.
Good point, I’ll keep that in mind.
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure were possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away tax payers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or where the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win); the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective, and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
That’s good to hear.
For household appliances, energy ratings are given based on performance at full rated capacity. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously none for electronic displays and lighting sources). They’re also based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter has no effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to hit a desired rating by lightening parts, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid the added uncertainty in warranty stocks.
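To put the scale in perspective, here is a back-of-the-envelope sketch with entirely made-up numbers (the real split depends on the appliance and the test cycle):

```python
# Hypothetical per-cycle energy breakdown for a washing machine, to show why
# trimming drive power barely moves the reported kWh/cycle figure: most of
# the energy goes into heating water, not moving parts.

heating_kwh = 0.850            # made-up: energy to heat the wash water
drive_kwh   = 0.060            # made-up: energy for the drum motor
other_kwh   = 0.040            # made-up: pump, electronics, etc.

baseline = heating_kwh + drive_kwh + other_kwh
lighter  = heating_kwh + drive_kwh * 0.9 + other_kwh   # 10% lighter moving parts

print(round(baseline, 3))  # 0.95
print(round(lighter, 3))   # 0.944 -> a difference of a few thousandths of a kWh/cycle
```

On numbers like these, shaving weight off the moving parts buys you a few thousandths of a kWh per cycle, which is the kind of margin the comment above is talking about.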
Much like orthodox Marxism-Leninism, the Austrian School describes economics by how it should be, not how it actually is.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
Deregulation was certainly meant to achieve a lot of specific things, not just general outcomes like a more competitive landscape – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved those goals in the short run – it was preserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been much progress in smartphone hardware over the last 10 years.
Since 2015 every smartphone has been much the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation makes progress more difficult. IMHO it will drive innovation: phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game. Just as PC hardware specs were propelled by gaming, the mobile market is driven by games, and mobile is now, I believe, the dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
But a lot of games are intensive, and enthusiasts often prefer them. Still, the time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I was recently reunited with my OnePlus 1 (2014) running Lineage OS, and it was choppy at just about everything – this was with the apps from when I last used it (2017), in airplane mode, so not just contemporary bloat – especially loading map tiles in OSM. I tried Ubuntu Touch on it this year (2023) (listed as having great support) and it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even where old hardware isn’t performance-bottlenecked, newer hardware is certainly more efficient (though I highly doubt the energy savings would exceed the cost of just keeping an old device).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
The original comment mentioned the iPhone 8 specifically. The Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years, and more than competitive once you factor in cost. I doubt anyone is buying into either platform purely on performance numbers anyway, versus ecosystem and/or wanting hardware options not offered by the other.
That’s what I’m saying — Qualcomm goes for large volumes of mid-range chips, and does not have products on the high end. They aren’t even trying.
BTW, I’m flabbergasted that Apple put M1 in iPads. What a waste of a powerful chip on baby software.
Uh, what about their 8xx-series SoCs? On paper they’re comparable to Apple’s A-series; it’s the software that’s usually worse.
Still a massacre.
Yeah, true, I could have checked myself. Gap is even bigger right now than two years ago.
Qualcomm is in a self-inflicted rut enabled by their CDMA stranglehold. Samsung is even further behind because their culture doesn’t let them execute.
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-5-single-Android-980x735.jpeg
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-Multi-Android-980x735.jpeg
Those are cherry-picked comparisons. Apple releases on a different cadence. If you check right now, the S23 beats it, as do most flagships. If you blur the timing, it’s all about the same.
It would cost them more to develop and commission fabrication of a more “appropriate” chip.
The high-end Qualcomm is fine. https://www.gsmarena.com/compare.php3?idPhone1=12082&idPhone3=11861&idPhone2=11521#diff- (may require viewing as a desktop site to see 3 columns)
With phones of the same tier released before and after it, you can see the benchmarks are all close, as is battery life. Features are wildly different though, since Android can offer a range of different hardware.
It doesn’t for laptops[1], so I doubt it would for smartphones either.
[1] https://www.lowtechmagazine.com/2020/12/how-and-why-i-stopped-buying-new-laptops.html
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11 and 12 have distinct visual styles, and between vendors this distinction can vary further – this may be less apparent on OnePlus, as they use (or at least used to use) their own OxygenOS (AOSP upstream, of course) – but consumers notice, even if they can’t clearly articulate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful, but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interests. I think it’s a lie that invisible-hand believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means they were given a Hobson’s choice.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
Okay, that was sloppy of me.
“Not wanted more than any of the other features on offer.”
“Not wanted enough to motivate serious investment in a competitor.”
That last is most telling.
This cynical view is unwarranted in the case of the EU, which so far is doing pretty well at avoiding regulatory capture.
The EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And its regulations generally start with realistic requirements and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least a couple of months’ worth of energy bills, and the benefit of better electrical filtering trickles down to consumers without them being immediately aware of it. But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose – do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels spelling out the device’s security-support guarantees.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels, but companies were free to promise as much or as little as they wanted. Her research looked at this across a few consumer-goods classes and used the standard methodology where users are shown a small number of devices with different specs and different things on these labels and are then asked to pick their preference; price, features, and the security SLA were varied across the options. I can’t remember the exact numbers, but she found that users consistently selected higher-priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction the cordless motors can provide. This makes them vastly more energy-efficient, although that is probably cancelled out by the higher impact of production and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Apple, say, is always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced unibody MacBooks? Or the 10-minute video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and is still waterproof. Or how they came up with a new super-fancy screen glass that can survive 45 drops.
Depending on how you define “progress”, there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that will ever come to the mass-produced iPhone, but maybe a useful one for some professionals. With available schematics this might actually have a chance of coming to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.
Phones have not made meaningful progress since the first few years of the iPhone. It’s about time.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
thank god
There’s a chance that tech companies start to make EU-only hardware.
This overall shift will favor the kind of long-term R&D investment that was placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
What the hell does “manufacturers will have to make compatible software updates available for at least 5 years” mean? Who determines what bugs now legally need to be fixed on what schedule? What are the conditions under which this rule is considered to have been breached? What if the OS has added features that are technically possible on older devices but would chew up batteries too quickly because of missing accelerators found on newer devices? This is madness.
Took me all of 10 minutes to find the actual law instead of a summary. All of the questions you have asked have pretty satisfying answers there IMO.
From the “Operating system updates” section:
If you fix a bug or security issue, you have to backport it to all models you’ve offered for sale in the past 5 years. If your upstream (android) releases a security fix, you must release it within 4 months (part c of that section).
Part F says if your updates slow down old models, you have to restore them to good performance within “a reasonable time” (yuck, give us a number). An opt-in feature toggle that enables new features but slows down the device is permitted, which I suspect is how your last question would be handled in practice.
That’s going to cause a lot of fly-by-night vendors to abandon the market I imagine :)
I wonder how they determine what is “the same operating system”
Good. The world is much better off without them.