Looks like the beginning of the end for the unnecessarey e-waste provoked by companies forcing obselence and anti-consumer patterns made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m a EU citizen, and I see this argument made every single time the EU passes a new legislation affecting tech. So far, those worries never materialized.
I just can’t see why having removeable batteries would hinder innovation. Each company will still want to sell their prducts, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capeable of designing AirPods that have a removeable battery? The battery is even in the stem, so it could be as simple as having the stem be de-tacheable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken it is about service-time replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves special exception for batteries that’s even more manufacturer-friendly than other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirements set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic) so in practice that just means, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Do you think Apple engineers are not capeable of designing AirPods that have a removeable battery?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removeable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for funcional purposes.
The law also specified that the banner must have a single button to “reject all cookies”, so any website that ask you to go trought a complex flow to reject your consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example “ choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “ sole purpose is to improve website function”. Definitely overreach.
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party.
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reuptation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent up on taxpayer’s purse and not spawned a silicon valley of their own.
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
Silicon Valley was born through the intersection of several contributing factors, including a skilled science research base housed in area universities, plentiful venture capital, permissive government regulation, and steady U.S. Department of Defense spending.
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make a slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community, that many of their chats were were designed after, they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden or only allow other big corporate players to adapt/use their protocol with a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you or gives you RF burns of stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making out it to be.
I am curious however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but also express concern that regulation will quickly slippery slope away, but I am curious if you think the current system where there is more and more lock in both on the web and in devices can be pried back from those parties?
The alternative is not regulating, and it’s delivered absolutely stunning results so far.
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There are a lot of economic incentives, and it was a new field of science application, that has benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again and associated anti-consumer decisions to the benefit of the bigger players. This situation is well-known – tragedy of the common situations in markets is never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations not to attempt to maximize their profits when they have an opportunity.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
As a customer, I react to this by never voluntarily buying Apple products.
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product.
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked down phones with impossible to swap components (pairing), that are supported only for 1 year, what’s people to do? If people have no idea how secure the authentication of someone is on GitHub, how can they make an informed decision about security?
But by the same token, if people don’t care to research the repair costs of their devices before buying them
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Ensuring “additional” compliance is often a one-time cost. As an EE, you’re supposed to know these things and keep up with them, you don’t come up with a schematic like they taught you in school twenty years ago and hand it over to a compliance consultant to make it deployable today. If there’s a major regulatory change you maybe have to hire a consultant once. More often than not you already have one or more compliance consultants on your payroll, who know their way around these regulations long before they’re ratified (there’s a long adoption process), so it doesn’t really involve huge costs. The additional compliance testing required in this bill is pretty slim and much of it is on the mechanical side. That is definitely not one-time but trivially self-certifiable, and much of the testing time will likely be cut by having some of it done on the supplier end (for displays, case materials etc.) – where this kind of testing is already done, on a much wider scale and with a lot more parameters, so most partners will likely cover it cost-free 12 months from now (and in the next couple of weeks if you hurry), and in the meantime, they’ll do it for a nominal “not in the statement of work” fee that, unless you’re just rebranding OEM products, is already present on a dozen other requirements, too.
An embarrassing proportion of my job consisted not of finding creative ways to fit a removable battery, but in finding creative ways to keep a fixed battery in place while still ensuring adequate cooling and the like, and then in finding even more creative ways to design (and figure out the technological flow, help write the servicing manual, and help estimate logistics for) a device that had to be both testable and impossible to take apart. Designing and manufacturing unrepairable, logistically-restricted devices is very expensive, too, it’s just easier for companies to hide its costs because the general public doesn’t really understand how electronics are manufactured and what you have to do to get them to a shop near them.
The intrinsic difficulty of coming up with a good design isn’t a major barrier of entry for new players any more than it is for anyone. Rather, most of them can’t materialise radically better designs because they don’t have access to good suppliers and good manufacturing facilities – they lack contacts, and established suppliers and manufacturers are squirrely about working with them because they aren’t going to waste time on companies that are here today and they’re gone tomorrow. When I worked on regulated designs (e.g. medical) that had long-term support demands, that actually oiled some squeaky doors on the supply side, as third-party suppliers are equally happy selling parts to manufacturers or authorised servicing partners.
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-ran marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen that has the same priorities I do not be able to buy a the device they want?
The law doesn’t prohibit waterproof devices. In fact, it makes clear expections for such cases. It mandates that the battery must be repleaceable without specialized tools and by any competent shop, it doesn’t mandate a user-replaceable battery.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice longer than the bill demands. NT 3.1 was supported for seven years, and Windows 95 was supported for 6. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ‘90s. Schools were gradually phasing them out. Things like the Commodore 64 of Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying in services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside, they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because they have locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice to upgrade.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1] and so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, one of the drafts required labelling the support lifetime (which research has shown will have a big impact - it has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics have shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s a pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of market is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
I might sound like I’m dismissing an entire scientific discipline, but economics have shown strong signs of being extremely problematic on this front for a long time.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now?
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information to customer and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The eventually regulations banned goods below a certain efficiency rating but it was largely unnecessary because the market adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV
I can see how such usurpation could distort my view.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist.
Well… yeah.
Precisely what [markets] will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here.
I love this example. Plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion, they do so because they just can’t measure the criterion even if they cared. Even a Libertarian should admit that making good purchase decisions requires being well informed.
You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do).
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public instances (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed results in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said of the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
My favourite example is the Internet: the fibre should be installed by public instances (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the most capitalistic something is (extreme example: nuclear power plants), the more difficult private enterprises will want to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
The problem with public ownership is that it’s hard to incentivise efficiency improvements.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest on. For instance, we need boxes and transmitters and whatnot to lighten up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject, I’m not informed enough to have an opinion on where the public/private frontier is best placed.
The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the most capitalistic something is (extreme example: nuclear power plants), the more difficult private enterprises will want to invest in it, and if they do, the more they will want to extract rent from their investment
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure were possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away tax payers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second lowest bidder or they the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win), the latter ensures that you get paid as much as someone else thought they could deliver, reducing risk to the buyer. There are a lot of variations on this that are differently effective and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
A lot of interesting infrastructure were possible because private investors gambled and a lot of them lost a big pile of money.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them)
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
It’s often a matter oh how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information to customer and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
Were they actually successful, or did they only decrease operating energy use?
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
You can’t get a better rating by making a device that lasts half as long.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of KWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legilslation has had concrete goals that it sought to achieve. Most of them actually achieved them in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take them customers or third-parties to court (see e.g. Apple. For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt on how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there was heavy progress the last 10 years on smartphone hardware.
Since 2015 every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation, phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game–in which like PC hardware specs were propelled by gaming, so is the mobile market driven by games which I believe is now the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
I don’t think the games are really that CPU / GPU intensive
But a lot are intensive & enthusiasts often prefer it. But still those time-waster types and e-sports tend to run on potatoes to grab the largest audience.
Anecdotally, I recently was reunited with my OnePlus 1 (2014) running Lineage OS, & it was choppy at just about everything (this was using the apps from when I last used it (2017) in airplane mode so not just contemporary bloat) especially loading map tiles on OSM. I tried Ubuntu Touch on it this year (2023) (listed as great support) & was still laggy enough that I’d prefer not to use it as it couldn’t handle maps well. But even if not performance bottle-necked, efficiency is certainly better (highly doubt it’d save more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android chips been fine for years & more than competitive once you factor in cost. I would doubt anyone is buying into either platform purely based on performance numbers anyhow versus ecosystem and/or wanting hardware options not offered by one or the other.
Those are some cherry-picked comparisons. Apple release on a different cadence. You check right now, & S23 beats up on it as do most flagship now. If you blur the timing, it’s all about the same.
With phones of the same tier released before & after you can see benchmarks are all close as is battery life. Features are wildly different tho since Android can offer a range of different hardware.
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, 12 have distinct visual styles, and between vendors this distinction can further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, used to), but consumers notice even if they can’t clearly relate what they’ve noticed.
I’m using LimeageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and is enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand-believers tell themselves when claiming that customers have a choice between a fixable phone and a glued-phone with an appstore. Of course customers will chose the glued-phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone, it means that they were given a Hobson’s choice
but manufacturers don’t provide it because it goes against their interest.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
This cynical view is unwarranted in the case of EU, which so far is doing pretty well avoiding regulatory capture.
EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements, and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least a two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc. Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets unpractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:
The number of years that the device would get security updates.
The maximum time between a vulnerability being disclosed and the device getting the update.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels but companies are free to put as much or as little as they wanted. Her research looked at this across a few consumer good classes and used the standard methodology where users were shown a small number of devices with different specs and different things on these labels and then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers but she found that users consistently were willing to select higher priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cylones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with a (relatively) weak suction provided by the cordless engines. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production, and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proude when they come up with a new design. Remember a 15 min mini-doc on their processes when they introduced unibody macbooks? Or 10 min video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and also waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.
Depending on how you define “progress” there can be a plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it a “progress” if you can replace one of the cameras on your iPhone Pro with, say, infrared camera? Definitely not a mainstream feature to ever come to mass-produced iPhone but maybe a useful feature for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance for it to ever come to a glued solid rectangle that rejects any part but the very specific it came with from the factory.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
This overall shift will favor long term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eats vegetables improves their health. This is necessary soil for future booms.
Anyone have invites? This is intriguing to me, I’m hopeful about the broader move away from VC funding, at least for things that aren’t really capital intensive.
There’s a limited-invite thing on their subreddit about every week or so, and you can also email invites@tildes.net (don’t know the success rate with that one).
I am in the eu use joker.com and njalla and have no complaints.
I tried to use porkbun but for some reason I got flagged for fraud (normal credit card I use often, normal email, no VPN) they wanted a bunch of data including my passport to unlock my account so I just gave up. Actually at this point I gave up on us registrars and trying to find the cheapest and just decided I am willing to pay a bit more.
I am thankful for docker as well. But I am actually commenting here because you introduced me to Eyvind Earle’s paintings. The one you chose for that page is amazing, but he has others like “Green Hillside”. So good. Thank you for that.
Really cool idea. Would be nice if there was a repository of recipes in this format and people could share recipes. I’ve been using http://based.cooking and it’s really nice to skip the life story of a recipe and all the ads and popups to subscribe.
Anybody else thinks it’s a bit strange this is written in rust? also esbuild is written in go, I think. I mean.. should that not be kind of a red flag for javascript?
I mean Lua is written in C. CPython is written in C. Ruby… the list goes on.
I heavily embrace writing compilers and parsing tools in languages that can let you eke out performance and are probably better suited for the task!
Javascript has loads of advantages, but “every attribute access is a hash table lookup (until the JIT maybe fixes it for you)” seems like a silly cost to pay if you can avoid it for the thing that you really really really want to be fast.
Honestly I think the “write stuff in the language it is meant for” has cost so much dev time in the long term. It’s definitely valuable for prototyping and early versions, but for example templating language rendering not being written in tooling that tries to minimize allocations and other things means that everything is probably 3-5x slower than it could be. I get a bunch of hits on a blog post because people just search “pylint slow” in Google so much.
Meanwhile we have gotten many languages that are somewhat ergonomic but performant (Rust, to some extend Go or Java as well. Modern C++, kinda?). Obviously Pylint being Python means recruiting contributors is “easier” in theory, but in practice there’s a certain kind of person who works on these tools in general, and they’re probably not too uncomfortable with using many langauges IMO.
to me it’s a bit strange that javascript is such an old language but it’s either not so optimized/fast enough for people use it to build/compile javascript, or people don’t want to use it even though they are building a build tool for it?
Javascript has been going through a lot of optimizations, feature additions etc but it’s still a high level interpreted language. Compiling Javascript itself (or anything else) efficiently isn’t really among the primary goals of the language. I see no problem at opting for compiled languages optimized for performance for such a task. This does not diminish the language in my eyes because this is not what the language is designed for. Javascript may be terrible for a lot of reasons but this is not really a red flag for me.
Age of the language doesn’t make it fast or slow, neither make it more or less optimised. Rust and Go will be faster than JS mostly, because these do not need to have support for many of the JS features, that make it hard to compile to native code - runtime reflection on anything for example.
Producing code that executes quickly is just one of a large number of possible goals for a language. It is especially a goal for rust, but it’s not a priority for javascript.
This is perfectly normal. Languages that do prioritise speed of execution of the produced code make other trade-offs that would not be palatable to javascript use cases. The fact that someone is writing a bundler for javascript in another language is more a sign of the value that javascript brings, even to people familiar with other technologies.
JS is pretty fast when you stay on the optimal path, which is very tricky to do for anything bigger than a single hot loop. You usually need to depend on implementation details of JIT engines, and shape your code to their preference, which often results in convoluted code (e.g. hit a sweet spot of JIT’s inlining and code caching heuristics, avoid confusing escape analysis, don’t use objects that create GC pressure).
OTOH Rust is compiled ahead of time, and its language constructs are explicitly designed to be low-level and optimize in a predictable way. You don’t fight heuristics of GC and JIT optimization tiers. It’s easy to use stack instead of the heap, control inlining, indirections, dynamic dispatch. There’s tooling to inspect compiled code, so you can have a definitive answer whether some code is efficient or not.
Rust is a more fitting tool for high-performance parsing.
Not really. Idiomatic JS code for this kind of workload will generate a lot of junk for the collector, and there’s also the matter of binding to various differing native APIs for file monitoring.
Had totally forgotten I had pcre-mode enabled, but it’s part of pcre2el. pcre-mode is marked experimental (advises a handful of functions). I’ve not yet run into issues, though I don’t typically need more than basic regexps. There are alternatives in there (ie. use pcre-query-replace-regexp) to avoid advices.
It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.
Sometimes I think that Mozilla has been infiltrated by Google people to sabotage it. I have no evidence for this, but observed events don’t contradict it either.
It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.
I agree, but the deck is stacked against Mozilla. They are a relatively small nonprofit largely funded by Google. Structurally, there is no way they can make a product that competes. The problem is simply that there is no institutional counterweight to big tech right now, and the only real solutions are political: antitrust, regulation, maybe creating a publicly-funded institution with a charter to steward the internet in the way Mozilla was supposed to. There’s no solution to the problem merely through better organizational decisions or product design.
I don’t really agree; there’s a lot of stuff they could be doing better, like not pushing out updates that change the colour scheme in such a way that it becomes nigh-impossible to see which tab is active. I don’t really care about “how it looks”, but this is just objectively bad. Maybe if you have some 16k super-HD IPS screen with perfect colour reproduction at full brightness in good office conditions it’s fine, but I just have a shitty ThinkPad screen and the sun in my home half the time (you know, like a normal person). It’s darn near invisible for me, and I have near-perfect eyesight (which not everyone has). I spent some time downgrading Firefox to 88 yesterday just for this – which it also doesn’t easily allow, not if you want to keep your profile anyway – because I couldn’t be arsed to muck about with userChrome.css hacks. Why can’t I just change themes? Or why isn’t there just a setting to change the colour?
There’s loads of other things; one small thing I like to do is not have a “x” on tabs to close it. I keep clicking it by accident because I have the motor skills of a 6 year old and it’s rather annoying to keep accidentally closing tabs. It used to be a setting, then it was about:config, then it was a userChrome.css hack, now it’s a userChrome.css hack that you need to explicitly enable in about:config for it to take effect, and in the future I probably need to sacrifice a goat to our Mozilla overlords if I want to change it.
I also keep accidentally bookmarking stuff. I press ^D to close terminal windows and sometimes Firefox is focused and oops, new bookmark for you! Want to configure keybinds for Firefox? Firefox say no; you’re not allowed, mere mortal end user; our keybinds are perfect and work for everyone, there must be something wrong with you if you don’t like it! It’s pretty darn hard to hack around this too – more time than I was willing to spend on it anyway – so I just accepted this annoyance as part of my life 🤷
“But metrics show only 1% of people use this!” Yeah, maybe; but 1% here and 5% there and 2% somewhere else and before you know it you’ve annoyed half (of not more) of your userbase with a bunch of stuff like that. It’s the difference between software that’s tolerable and software that’s a joy to use. Firefox is tolerable, but not a joy. I’m also fairly sure metrics are biased as especially many power users disable it, so while useful, blindly trusting it is probably not a good idea (I keep it enabled for this reason, to give some “power user” feedback too).
Hell, I’m not even a “power user” really; I have maybe 10 tabs open at the most, usually much less (3 right now) and most settings are just the defaults because I don’t really want to spend time mucking about with stuff. I just happen to be a programmer with an interest in UX who cares about a healthy web and knows none of this is hard, just a choice they made.
These are all really simple things; not rocket science. As I mentioned a few days ago, Firefox seems have fallen victim to a mistaken and fallacious mindset in their design.
Currently Firefox sits in a weird limbo that satisfies no one: “power users” (which are not necessarily programmers and the like, loads of people with other jobs interested in computers and/or use computers many hours every day) are annoyed with Firefox because they keep taking away capabilities, and “simple” users are annoyed because quite frankly, Chrome gives a better experience in many ways (this, I do agree, is not an easy problem to solve, but it does work “good enough” for most). And hey, even “simple” users occasionally want to do “difficult” things like change something that doesn’t work well for them.
So sure, while there are some difficult challenges Firefox faces in competing against Google, a lot of it is just simple every-day stuff where they just choose to make what I consider to be a very mediocre product with no real distinguishing features at best. Firefox has an opportunity to differentiate themselves from Chrome by saying “yeah, maybe it’s a bit slower – it’s hard and we’re working on that – but in the meanwhile here’s all this cool stuff you can do with Firefox that you can’t with Chrome!” I don’t think Firefox will ever truly “catch up” to Chrome, and that’s fine, but I do think they can capture and retain a healthy 15%-20% (if not more) with a vision that consists of more than “Chrome is popular, therefore, we need to copy Chrome” and “use us because we’re not Chrome!”
Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”. Someone filed a bug requesting this was changeable (not even default changed), that bug is now 20 years old.
It strikes me that this would be a great first issue for a new contributor, except the reason it’s been unfixed for so long is presumably that they don’t want it fixed.
A shortcut to quit isn’t a problem, losing user data when you quit is a problem. Safari has this behaviour too, and I quite often hit command-Q and accidentally quit Safari instead of the thing I thought I was quitting (since someone on the OS X 10.8 team decided that the big visual clues differentiating the active window and others was too ugly and removed it). It doesn’t bother me, because when I restart Safari I get back the same windows, in the same positions, with the same tabs, scrolled to the same position, with the same unsaved form data.
I haven’t used Firefox for a while, so I don’t know what happens with Firefox, but if it isn’t in the same position then that’s probably the big thing to fix, since it also impacts experience across any other kind of browser restart (OS reboots, crashes, security updates). If accidentally quitting the browser loses you 5-10 seconds of time, it’s not a problem. If it loses you a load of data then it’s really annoying.
Firefox does this when closing tabs (restoring closed tabs usually restores form content etc.) but not when closing the window.
The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab 🤷
The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab
Does changing browser.tabs.closeWindowWithLastTab in about:config fix that?
I have it set to false already, I tested it to make sure and it doesn’t make a difference (^W won’t close the tab, as expected, but ^Q with one tab will still just quit).
I quite often hit command-Q and accidentally quit Safari
One of the first things I do when setting up a new macOS user for myself is adding alt-command-Q in Preferences → Keyboard → Shortcuts → App Shortcuts for “Quit Safari” in Safari. Saves my sanity every day.
You can do this in windows for firefox (or any browser) too with an autohotkey script. You can set it up to catch and handle a keypress combination before it reaches any other application. This will be global of course and will disable and ctrl-q hotkey in all your applications, but if you want to get into detail and write a more complex script you can actually check which application has focus and only block the combination for the browser.
This sounds like something Chrome gets right - if I hit CMD + Q I get a prompt saying “Hold CMD+Q to Quit” which has prevented me from accidentally quitting lots of times. I assumed this was MacOS behaviour, but I just tested Safari and it quit immediately.
Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”.
That was fixed a long time ago, at least on Linux. When I press it, a modal says “You are about to close 5 windows with 24 tabs. Tabs in non-private windows will be restored when you restart.” ESC cancels.
I had that problem for a while but it went away. I have browser.quitShortcut.disabled as false in about:config. I’m not sure if it’s a default setting or not.
It seems that this defaults to false. The fact you have it false, but don’t experience the problem, is counter-intuitive to me. Anyway the other poster’s suggestion was to flip this, so I’ll try that. Thanks!
That does seem backwards. Something else must be overriding it. I’m using Ubuntu 20.04, if that matters. I just found an online answer that mentions the setting.
On one level, I disagree – I have zero problems with Firefox. My only complaint is that sometimes website that are built to be Chrome-only don’t work sometimes, which isn’t really Firefox’s problem, but the ecosystem’s problem (see my comment above about antitrust, etc). But I will grant you that Firefox’s UX could be better, that there are ways the browser could be improved in general. However, I disagree here:
retain a healthy 15%-20% (if not more)
I don’t think this is possible given the amount of resources Firefox has. No matter how much they improve Firefox, there are two things that are beyond their control:
Most users use Google products (gmail, calendar, etc), and without an antitrust case, these features will be seamlessly integrated into Chrome, and not Firefox.
Increasingly, websites are simple not targeting Firefox for support, so normal users who want to say, access online banking, are SOL on Firefox. (This happens to me, I still have to use Chrome for some websites)
Even the best product managers and engineers could not reverse Firefox’s design. We need a political solution, unless we want the web to become Google Web (tm).
Hm, last time I tried this it didn’t do much of anything other than change the colour of the toolbar to something else or a background picture; but maybe it’s improved now. I’ll have a look next time I try mucking about with 89 again; thanks!
I agree with Firefox’s approach of choosing mainstream users over power-users - that’s the only way they’ll ever have 10% or more of users. Firefox is doing things with theming that I wish other systems would do - they have full “fresco” themes (images?) in their chrome! It looks awesome! I dream about entire DEs and app suites built from the ground up with the same theme of frescoes (but with an different specific fresco for each specific app, perhaps tailored to that app). Super cool!
I don’t like the lack of contrast on the current tab, but “give users the choice to fix this very specific issue or not” tends to be extremely shortsighted - the way to fix it is to fix it. Making it optional means yet another maintenance point on an already underfunded system, and doesn’t necessarily even fix the problem for most users!
More importantly, making ultra-specific optionss like that is usually pushing decisions onto the user as a method of avoiding internal politicking/arguments, and not because pushing to the user is the optimal solution for that specific design aspect.
As for the close button, I am like you. You can set browser.tabs.tabClipWidth to 1000. Dunno if it is scheduled to be removed.
As for most of the other grips, adding options and features to cater for the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then a new feature needs to work with the option enabled and disabled. Removing options is just a way to keep the code lean.
My favorite example in the distribution world is Debian. Debian supports tries to be the universal OS. We are drowning with having to support everything. For examples, supporting many init systems is more work. People will get to you if there is a bug in the init system you don’t use. You spend time on this. At the end, people not liking systemd are still unhappy and switch to Devuan which supports less init systems. I respect Mozilla to keep a tight ship and maintaining only the features they can support.
Nobody would say anything if their strategy worked. The core issue is that their strategy obviously doesn’t work.
adding options and features to cater for the needs of a small portion of users
It ’s not even about that.
It’s removing things that worked and users liked by pretending that their preferences are invalid. (And every user belongs to some minority that likes a feature others may be unaware of.)
See the recent debacle of gradually blowing up UI sizes, while removing options to keep them as they were previously.
Somehow the saved cost to support some feature doesn’t seem to free up enough resources to build other things that entice users to stay.
All they do with their condescending arrogance on what their perfectly spherical idea of a standard Firefox user needs … is making people’s lives miserable.
They fired most of the people that worked on things I was excited about, and it seems all that’s left are some PR managers and completely out-of-touch UX “experts”.
As for most of the other grips, adding options and features to cater for the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then a new feature needs to work with the option enabled and disabled. Removing options is just a way to keep the code lean.
It seems to me that having useful features is more important than having “lean code”, especially if this “lean code” is frustrating your users and making them leave.
I know it’s easy to shout stuff from the sidelines, and I’m also aware that there may be complexities I may not be aware of and that I’m mostly ignorant of the exact reasoning behind many decisions (most of us here are really, although I’ve seen a few Mozilla people around), but what I do know is that 1) Firefox as a product has been moving in a certain direction for years, 2) that Firefox has been losing users for years, 3) that I know few people who truly find Firefox an amazing browser that a joy to use, and that in light of that 4) keep doing the same thing you’ve been doing for years is probably not a good idea, and 5) that doing the same thing but doing it harder is probably an even worse idea.
I also don’t think that much of this stuff is all that much effort. I am not intimately familiar with the Firefox codebase, but how can a bunch of settings add an insurmountable maintenance burden? These are not “deep” things that reach in to the Gecko engine, just comparatively basic UI stuff. There are tons of projects with a much more complex UI and many more settings.
Hell, I’d argue that even removing the RSS was also a mistake – they should have improved it instead, especially after Google Reader’s demise there was a huge missed opportunity there – although it’s a maintenance burden trade-off I can understand it better, it also demonstrates a lack of vision to just say “oh, it’s old crufty code, not used by many (not a surprise, it sucked), so let’s just remove it, people can just install an add-on if they really want it”. This is also a contradiction with Firefox’s mantra of “most people use the defaults, and if it’s not used a lot we can just remove it”. Well, if that’s true then you can ship a browser with hardly any features at all, and since most people will use the defaults they will use a browser without any features.
Browsers like Brave and Vivaldi manage to do much of this; Vivaldi has an entire full-blown email client. I’d wager that a significant portion of the people leaving Firefox are actually switching to those browsers, not Chrome as such (but they don’t show up well in stats as they identify as “Chrome”). Mozilla nets $430 million/year; it’s not a true “giant” like Google or Apple, but it’s not small either. Vivaldi has just 55 employees (2021, 35 in 2017); granted, they do less than Mozilla, but it doesn’t require a huge team to do all of this.
And every company has limited resources; it’s not like the Chrome team is a bottomless pit of resources either. A number of people in this thread express the “big Google vs. small non-profit Mozilla”-sentiment here, but it doesn’t seem that clear-cut. I can’t readily find a size for the Chrome team on the ‘net, but I checked out the Chromium source code and let some scripts loose on that: there are ~460 Google people with non-trivial commits in 2020, although quite a bit seems to be for ChromeOS and not the browser part strictly speaking, so my guestimate is more 300 people. A large team? Absolutely. But Mozilla’s $430/million a year can match this with ~$1.5m/year per developer. My last company had ~70 devs on much less revenue (~€10m/year). Basically they have the money to spare to match the Chrome dev team person-for-person. Mozilla does more than just Firefox, but they can still afford to let a lot of devs loose on Gecko/Firefox (I didn’t count the number devs for it, as I got some other stuff I want to do this evening as well).
It’s all a matter of strategy; history is littered with large or even huge companies that went belly up just because they made products that didn’t fit people’s demands. I fear Firefox will be in the same category. Not today or tomorrow, but in five years? I’m not so sure Firefox will still be around to be honest. I hope I’m wrong.
As for your Debian comparison; an init system is a fundamental part of the system; it would be analogous to Firefox supporting different rendering or JS engines. It’s not even close to the same as “an UI to configure key mappings” or “a bunch of settings for stuff you can actually already kind-of do but with hacks that you need to explicitly search for and most users don’t know it exists”, or even a “built-in RSS reader that’s really good and a great replacement for Google Reader”.
I agree with most of what you said. Notably the removal of RSS support. I don’t work for Mozilla and I am not a contributor, so I really can’t answer any of your questions.
Another example of maintaining a feature would be Alsa support. It has been removed, this upsets some users, but for me, this is understandable as they don’t want to handle bug reports around this or the code to get in the way of some other features or refactors. Of course, I use Pulseaudio, so I am quite biased.
I think ALSA is a bad example; just use Pulseaudio. It’s long since been the standard, everyone uses it, and this really is an example of “147 people who insist on having an überminimal Linux on Reddit being angry”. It’s the kind of technical detail with no real user-visible changes that almost no one cares about. Lots of effort with basically zero or extremely minimal tangible benefits.
And ALSA is a not even a good or easy API to start with. I’m pretty sure that the “ALSA purists” never actually tried to write any ALSA code otherwise they wouldn’t be ALSA purists but ALSA haters, as I’m confident there is not a single person that has programmed with ALSA that is not an ALSA hater to some degree.
Pulseaudio was pretty buggy for a while, and its developer’s attitude surrounding some of this didn’t really help, because clearly if tons of people are having issues then all those people are just “doing it wrong” and is certainly not a reason to fix anything, right? There was a time that I had a keybind to pkill pulseaudio && pulseaudio --start because the damn thing just stopped working so often. The Grand Pulseaudio Rollout was messy, buggy, broke a lot of stuff, and absolutely could have been handled better. But all of that was over a decade ago, and it does actually provide value. Most bugs have been fixed years ago, Poettering hasn’t been significantly involved since 2012, yet … people still hold an irrational hatred towards it 🤷
ALSA sucks, but PulseAudio is so much worse. It still doesn’t even actually work outside the bare basics. Firefox forced me to put PA on and since then, my mic randomly spews noise and sound between programs running as different user ids is just awful. (I temporarily had that working better though some config changes, then a PA update - hoping to fix the mic bug - broke this… and didn’t fix the mic bug…)
I don’t understand why any program would use the PA api instead of the alsa ones. All my alsa programs (including several I’ve made my own btw, I love it whenever some internet commentator insists I don’t exist) work equally as well as pulse programs on the PA system… but also work fine on systems where audio actually works well (aka alsa systems). Using the pulse api seems to be nothing but negatives.
There’s also the fact that web browsers are simply too big to reimplement at this point. The best Mozilla can do (barely) is try to keep up with the Google-controlled Web Platform specs, and try to collude with Apple to keep the worst of the worst from being formally standardized (though Chrome will implement them anyway). Their ability to do even that was severely impacted by their layoffs last year. At some point, Apple is going to fold and rebase Safari on Chromium, because maintaining their own browser engine is too unprofitable.
At this point, we need to admit that the web belongs to Google, and use it only to render unto Google what is Google’s. Our own traffic should be on other protocols.
Product design can’t fix any of these problems because nobody is paying for the product. The more successful it is, the more it costs Mozilla. The only way to pay the rent with free-product-volume is adtech, which means spam and spying.
I don’t agree this is a vague ethical reason. Problem with those are concerns like deforestation (and destruction of habitats for smaller animals) to ship almond milk across the globe, and sewing as an alternative to poverty and prostitution, etc.
The browser privacy question is very quantifiable and concrete, the source is in the code, making it a concrete ethical-or-such choice.
ISTR there even being a study or two where people were asked about willingness to being spied upon, people who had no idea their phones were doing what was asked about, and being disconcerted after the fact. That’s also a concrete way to raise awareness.
At the end of the day none of this may matter if people sign away their rights willingly in favor of a “better” search-result filter bubble.
I don’t think they’re vague (not the word I used) but rather abstract; maybe that’s no the best word either but what I mean with it is that it’s a “far from my bed show” as we would say in Dutch. Doing $something_better on these topics has zero or very few immediate tangible benefits, but rather more abstract long-term benefits. And in addition it’s also really hard to feel that you’re really making a difference as a single individual. I agree with you that these are important topics, it’s just that this type of argument is simply not all that effective at really making a meaningful impact. Perhaps it should be, but it’s not, and exactly because it’s important we need to be pragmatic about the best strategy.
And if you’re given the choice between “cheaper (or better) option X” vs. “more expensive (or inferior) option Y with abstract benefits but no immediate ones”, then I can’t really blame everyone for choosing X either. Life is short, lots of stuff that’s important, and can’t expect everyone to always go out of their way to “do the right thing”, if you can even figure out what the “right thing” is (which is not always easy or black/white).
I think we agree that the reasoning in these is inoptimal either way.
Personally I wish these articles weren’t so academic, and maybe not in somewhat niche media, but instead mainstream publications would run “Studies show people do not like to be spied upon yet they are - see the shocking results” clickbaity stuff.
It probably wasn’t super-clear what exactly was intended with that in the first place so easy enough of a mistake to make 😅
As for articles, I’ve seen a bunch of them in mainstream Dutch newspapers in the last two years or so; so there is some amount of attention being given to this. But as I expended on in my other lengthier comment, I think the first step really ought to be making a better product. Not only is this by far the easiest to do and within our (the community’s) power to do, I strongly suspect it may actually be enough, or at least go a long way.
It’s like investing in public transport is better than shaming people for having a car, or affordable meat alternatives is a better alternative than shaming people for eating meat, etc.
I agree to an extent. Firefox would do well to focus on the user experience front.
I switched to Firefox way back in the day, not because of vague concerns about the Microsoft hegemony, or even concerns about web standards and how well each browser implemented them. I switched because they introduced the absolutely groundbreaking feature that is tabbed browsing, which gave a strictly better user experience.
I later switched to Chrome when it became obvious that it was beating Firefox in terms of performance, which is also a factor in user experience.
What about these days? Firefox has mostly caught up to Chrome on the performance point. But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.
These are the kinds of improvements that I’d like to see Firefox creating, in order to lure people back. You can’t guilt me into trying a new browser, you have to tempt me.
But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.
Opera had this over ten years ago (“tab stacking”, added in Opera 11 in 2010). Pretty useful indeed, even with just a limited number of tabs. It even worked better than Chrome groups IMO. Firefox almost-kind-of has this with container tabs, which are a nice feature actually (even though I don’t use it myself), and with a few UX enhancements on that you’ve got tab groups/stacking.
Opera also introduced tabbed browsing by the way (in 2000 with Opera 4, about two years before Mozilla added it in Phoenix, which later became Firefox). Opera was consistently way ahead of the curve on a lot of things. A big reason it never took off was because for a long time you had to pay for it (until 2005), and after that it suffered from “oh, I don’t want to pay for it”-reputation for years. It also suffered from sites not working; this often (not always) wasn’t even Opera’s fault as frequently this was just a stupid pointless “check” on the website’s part, but those were popular in those days to tell people to not use IE6 and many of them were poor and would either outright block Opera or display a scary message. And being a closed-source proprietary product also meant it never got the love from the FS/OSS crowd and the inertia that gives (not necessarily a huge inertia, but still).
So Firefox took the world by storm in the IE6 days because it was free and clearly much better than IE6, and when Opera finally made it free years later it was too late to catch up. I suppose the lesson here is that “a good product” isn’t everything or a guarantee for success, otherwise we’d all be using Opera (Presto) now, but it certainly makes it a hell of a lot easier to achieve success.
Opera had a lot of great stuff. I miss Opera 😢 Vivaldi is close (and built by former Opera devs) but for some reason it’s always pretty slow on my system.
This is fair and I did remember Opera being ahead of the curve on some things. I don’t remember why I didn’t use it, but it being paid is probably why.
I agree, I loved the Presto-era Opera and I still use the Blink version as my main browser (and Opera Mobile on Android). It’s still much better than Chrome UX-wise.
I haven’t used tab groups, but it looks pretty similar to Firefox Containers which was introduced ~4 years ahead of that blog post. I’ll grant that the Chrome version is built-in and looks much more polished and general purpose than the container extension, so the example is still valid.
I just wanted to bring this up because I see many accusations of Firefox copying Chrome, but I never see the reverse being called out. I think that’s partly because Chrome has the resources to take Mozilla’s ideas and beat them to market on it.
One challenge for people making this kind of argument is that predictions of online-privacy doom and danger often don’t match people’s lived experiences. I’ve been using Google’s sites and products for over 20 years and have yet to observe any real harm coming to me as a result of Google tracking me. I think my experience is typical: it is an occasional minor annoyance to see repetitive ads for something I just bought, and… that’s about the extent of it.
A lot of privacy advocacy seems to assume that readers/listeners believe it’s an inherently harmful thing for a company to have information about them in a database somewhere. I believe privacy advocates generally believe that, but if they want people to listen to arguments that use that assumption as a starting point, they need to do a much better job offering non-circular arguments about why it’s bad.
I think it has been a mistake to focus on loss of privacy as the primary data collection harm. To me the bigger issue is that it gives data collectors power over the creators of the data and society as a whole, and drives destabilizing trends like political polarization and economic inequality. In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.
In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.
I’m not @halfmanhalfdonut but I don’t think that brainwashing is needed to get humans to behave like this. This is just how humans behave.
things like individualism, solidarity, and collaboration exist on a spectrum, and everybody exhibits each to some degree. so saying humans just are individualistic is tautological, meaningless. everyone has some individualism in them regardless of their upbringing, and that doesn’t contradict anything in my original comment. that’s why I asked if there was some disagreement.
to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.
to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.
I think we’re going to have to agree to disagree. I can make a complicated rebuttal here, but it’s off-topic for the site, so cheers!
I think you’re only seeing the negative side (to you) of modern mass media and culture. Our media and culture also promote unity, tolerance, respect, acceptance, etc. You’re ignoring that so that you can complain about Google influencing media, but the reality is that the way you are comes from those same systems of conditioning.
The fact that you even know anything about income inequality and political polarization are entirely FROM the media. People on the whole are not as politically divided as media has you believe.
I agree with everything you’ve written in this thread, especially when it comes to the abstractness of pro-Firefox arguments as of late. Judging from the votes it seems I am not alone. It is sad to see Mozilla lose the favor of what used to be its biggest proponents, the “power” users. I truly believe they are digging their own grave – faster and faster it seems, too. It’s unbelievable how little they seem to be able to just back down and admit they were wrong about an idea, if only for a single time.
Firefox does have many features that Chrome doesn’t have: container tabs, tree style tabs, better privacy and ad-blocking capabilities, some useful dev tools that I don’t think Chrome has (multi-line JS and CSS editors, fonts), isolated profiles, better control over the home screen, reader mode, userChrome.css, etc.
This really shows off the capabilities of the language: both the backend and frontend are written in Nim. The frontend is Nim compiled down to a JavaScript SPA.
I’m the kind of person who is sensitive to latency; I dislike most JS-heavy browser-based things. The Nim Forum is as responsive as a JS-free / minimal JS site.
Not sure what counts as “serious in production”, but I’ve been running a kernel-syslogd on four or so machines ever since I wrote kslog. I also have several dozen command-line utilities including replacements for ls and procps as well as a unified diff highlighter. The Status IM Ethereum project has also been investing heavily in Nim.
I’ve been working on a key-value data store; it’s a wrapper around the C library libmdbx (an extension of LMDB), but with an idiomatic Nim API, and I’m working on higher level features like indexes.
I can not use FISH shell as it does not support such basic POSIX for loops as these:
% for LOG in ls *.log; do tail -5 ${LOG}; done
Its pointless to learn another new FISH syntax just for this …
If it would support the I could try it and maybe switch from ZSH … but once you setup your ZSH shell there is no point in using any other shell then ZSH …
But what’s the advantage of the different syntax? Most people considering Fish will already be familiar with POSIX for loops. and will still be writing POSIX for loops for both shell scripts and for interactive shells on other systems. Is the extra “do” so annoying that it’s worth the extra overhead of constantly switching between the different shell for loop syntaxes?
This is literally the main reason why I’m not using fish. I appreciate the good out-of-the-box configuration. I’m painfully familiar with ZSH’s fragility; it even made me switch back to bash. I would love a good, modern, pretty, nice-out-of-the-box shell. I just don’t want to use a non-POSIX shell. When I’m just writing pipelines and for loops interactively, POSIX shell’s issues aren’t really relevant When I’m writing anything complex enough for POSIX shell’s issues to be relevant, it’s in a shell script in a file, and I don’t want all my shell scripts to use some weird non-standard syntax which will preclude me from switching to a different shell in the future. So fish’s “improved” syntax is a disadvantage for interactive use, and isn’t something I would use for non-interactive use anyways.
Also, the official documentation tells you to run chsh -s <path to fish> to switch to Fish. Well, large parts of UNIX systems expect $SHELL to be POSIX-compatible. If you follow the official documentation your Sway configuration will break, all Makefiles will break, your sxhkd config will break, and lots of other programs will break. If it’s going to recommend switching to Fish with chsh, it really should be POSIX compatible.
IMO, fish is an amazing shell made less relevant through insistence on having a syntax which doesn’t even remotely resemble POSIX shell syntax.
Also, the official documentation tells you to run chsh -s to switch to Fish. Well, large parts of UNIX systems expect $SHELL to be POSIX-compatible. If you follow the official documentation your Sway configuration will break, all Makefiles will break, your sxhkd config will break, and lots of other programs will break. If it’s going to recommend switching to Fish with chsh, it really should be POSIX compatible.
This is simply not true. I use fish as my default shell and never had a problem with makefiles or my window manager, I use i3, but don’t know why sway would be different. I never encountered software that just runs scripts like that, they either have a #! line that specifies the shell or just call bash -c/sh -c directly or whatever.
IMO lack of Posix compliance in fish is a non issue, specially for experienced users which will know how to fallback to bash or write scripts and use #!. I used zsh for a long time and I’d use bash for scripts I could share with my team and everything just works. I feel like I could have switched to fish a lot sooner if I just tried instead of being put off by comments like these. If you are curious, just try it, maybe it’s for you, maybe it’s not, but don’t rely on other people’s opinion.
For me the biggest advantage of fish is I can easily understand my config. With zsh I had a bunch of plugins and configs to customize it and I understand almost none of it. Every time I wanted to change it I would lose a lot of time. Documentation was also a pain, searching for obscure features you had to read random forums and advice and try different things. fish has a great manual, and great man pages, everything is just easy to learn and lookup. I value that more than I value POSIX compliance, maybe you don’t, but form your own opinion.
This is simply not true. I use fish as my default shell and never had a problem with makefiles or my window manager
I’m happy that you haven’t experienced issues. I know 100% for a fact that having a non-POSIX $SHELL was causing a lot of issues for me last time I tried using fish. Maybe you’ve been lucky, or maybe they have a workaround now.
For me the biggest advantage of fish is I can easily understand my config. With zsh I had a bunch of plugins and configs to customize it and I understand almost none of it.
That’s fine. I agree that the things you mention are advantages of fish. I was wondering what the advantage of a different syntax is. Like, in which ways would fish but with a POSIX syntax be worse than the current implementation of fish? In which situations is it an advantage to have a POSIX-incompatible syntax?
There is the right and the wrong way to switch to fish: The right way is to set the login shell for a user (i.e. replace /bin/bash with /usr/bin/fish in /etc/passwd, either manually or with chsh).
The wrong way is to point /bin/sh at /usr/bin/fish: That, and only that symlink, is what matters to everything that implicitly invokes “the shell” (i.e. /bin/sh) without a hashbang, such as Makefiles. I’m not surprised at the carnage you described if you did this.
I, too, used fish for a while and did observe breakage, and I for sure did not do anything as silly as that. I remember in particular this bug: https://github.com/fish-shell/fish-shell/issues/2292 After that I changed my shell to bash and did exec fish in .bashrc. This, IIRC, did fix most of the bugs, though I still had to be careful: some scripts don’t actually use a shebang and expect the shell to notice that the executable is a shell script.
For example:
$ echo echo 5 > x.sh
$ chmod +x x.sh
Then, from bash:
$ ./x.sh
5
And from fish:
Failed to execute process './x.sh'. Reason:
exec: Exec format error
The file './x.sh' is marked as an executable but could not be run by the operating system.
That’s fine. I agree that the things you mention are advantages of fish. I was wondering what the advantage of a different syntax is. Like, in which ways would fish but with a POSIX syntax be worse than the current implementation of fish? In which situations is it an advantage to have a POSIX-incompatible syntax?
That I don’t know. It’s possible that fish would be better if it was POSIX compatible, I was just saying that even though it is not POSIX compatible, it’s still worth using. I think fish syntax is better for interactive use than bash/zsh, but that is just my opinion. For script use I use bash anyway. One exception is when writing custom completions for my custom commands and then I am oh so grateful I am not using bash/zsh and not using that syntax.
I have tons of scripts but I also use these for and while POSIX loops all the time … and putting them in scripts is pointless because everytime its for different purpose or for different files or commands.
Besides I have made a syntax error and I can not edit my comment now :)
If should be either like that:
% for LOG in *.log; do tail -5 ${LOG}; done
… or like that:
% for LOG in $( ls *.log ); do tail -5 ${LOG}; done
I really do use these POSIX for and while loops interactively all the time, not sure that this serves as a proof but:
Thanks for sharing the ZSH config. Mines is at about 230 lines which 1/4 is for all shells variables like PATH or less(1) settings and 3/4 for ZSH itself.
Note that fish is focused on being an interactive shell, so, if you primary metric is how easy it is to write a for loop, you are looking at the tool of a wrong class.
I personally use fish to type single-line commands in with out-of-the-box autosuggestions. If a need a for loop, I launch Julia.
EDIT: sorry, I’ve misread your comment. The general point stand, but for a different reason: POSIX compat is a non-goal of fish.
I write loops interactively in shell all the time. I wouldn’t consider a language suitable for interactive use as a shell if it lacked loops (or some other iteration mechanism).
How would you do what vermaden did up there with Julia? Seems to me like a huge overkill, but then again if you’re sufficiently proficient with Julia, it might make sense.
for log in filter(it->endswith(it, ".log"), readdir())
run(`tail -5 $log`)
end
The absence of globbing out of the box is indeed a pain.
On the other hand, I don’t need to worry about handling spaces in param substitution.
EDIT: to clarify, I din’t claim that Julia is better than zsh for scripting, it’s just the tool that I use personally. Although for me Julia indeed replaced both shell and Python scripts.
This would also work in Julia, is pretty short and shows how you can use patterns to find files (there’s Glob.jl, too, but I miss the ** glob from zsh):
[run(`tail -5 $log`) for log in readdir() if contains(log, r".log$")]
You might not know that endswith, contains and many other predicates in Julia have curried forms, so you could have written:
for log in filter(endswith(".log"), readdir())
run(`tail -5 $log`)
end
I agree that’s more typing! But pointlessness is in the fingers of typer, so to say.
I personally don’t know bash — it’s too quirky for me to learn it naturally. Moreover, for me there’s little value in POSIX: I am developing desktop software, so I care about windows as well. For this reason, I try not to invest into nix-only tech.
On the other hand, with Julia I get a well-designed programming language, which lets me to get the stuff done without knowing huge amount of trivia a la set -e or behavior of ${VAR} with spaces.
There’s also an irrational personal disagreement with stuff that’s quirky/poorly designed. If I were in a situation where I really needed a low-keystroke shell scripting language, I am afraid I’d go for picking some un-POSIX lisp, or writing my own lang.
To be honest, I find the snippet vermaden posted to already have lots of unessessairy typing. There’s no reason to use a loop if all you want is map commands to line. Xargs exists for this exact reason.
But more to the point, what’s stopping you from calling Bourne shell whenever you need? Fish clearly states in its manual that it is not intended to be yet another scripting language, which in my opinion is an important and useful divide. I still write shellscripts all the time and have been a fish user for 10 years.
Is this flame bait? If you get over the syntax hurdle, there are many reasons why for loops with globs, especially interactively, are better written in fish:
Try typing this in your terminal, with newlines:
for LOG in *.log
tail -5 $LOG
end
See? Who needs one-liners when you can have multiple in fish? This way, it stays easy to edit, read and navigate (2-dimensionally with the arrow keys) as you pile onto it.
In case your *.log expansion doesn’t match any file – compare this with any POSIX shell!
As a command argument…
ls *.log
…and in a for loop:
for LOG in *.log
echo found $LOG
end
As a command argument, fish prints an error, and doesn’t even run the command if the glob failed. In a for loop, the loop just iterates 0 times, with no error printed. In POSIX shelll, you can get either of these behaviours (by setting failglob or nullglob), but not both, which is a dilemma.
Recursive globs are on by default (not hidden behind the globstar flag) – who needs find anymore?
Its not my intention to force anyone to use ZSH or to discourage anyone from using FISH … besides I have made a syntax error and I can not edit my comment now :)
If should be either like that:
% for LOG in *.log; do tail -5 ${LOG}; done
… or like that:
% for LOG in $( ls *.log ); do tail -5 ${LOG}; done
I really do use these POSIX for and while loops interactively all the time, not sure that this serves as a proof but:
It isn’t different. The command line behaviour is.
Can you type such a multi-liner ↑ at the command line, edit it all at once (not just one line at a time, with no turning back, as in Bash), and retrieve it from history in its full glory?
I used zsh for a long time before switching to fish, and I basically found the interactive mode for fish to be a lot nicer—akin to zsh with oh-my-zsh, but even nicer, and faster. I absolutely switched for the interactive experience; most of my scripts are still written in bash for portability to my teammates.
The scripting changes make for a language that is a lot more internally consistent, so I have found that for my ad-hoc loops and stuff I do less googling to get it to work than I do using bash and zsh. Learning another shell syntax as idiosyncratic as bash would be very frustrating. At least with fish, it’s a very straightforward language.
If you are thinking about trying another shell, oil might be your jam, since it aims to be similar to POSIX syntax, but without as much ambiguity.
For lists I use orgzly with webdav sync. I have nginx serving a dav directory on a remote server I manage. To edit on my laptop I mount that dir with davfs and edit the notes with emacs org-mode.
For URLs, snippets of text, etc I want to transfer from my laptop clipboard to my phone clipboard I use: xsel -o | qrencode -t ANSIUTF8 and then just scan that code with an app on my phone.
For other random stuff I just use signal note to self.
I use i3lock. Its direct dependencies look reasonable, although I don’t know what they recursively expand to.
With that said, I don’t know whether it is “secure” or not because my threat model doesn’t really care if it is or not. I only use it to prevent cats and children from messing around on the keyboard. And for that, it works well.
It’s a great compromise when using X11, but the whole concept of screen savers on X11 is just so fragile. Actually suspending the session even if the screensaver should crash would be much cleaner (which is how every other platform, and also wayland handle it).
What I’m even more surprised about is that you said this compromise is possible with 25yo tech - why did no distro actually do any of this before?
jwz ragequit the software industry some 20 years ago and has been trolling the industry ever since. Just some context. He’s pretty funny but can be a bit of an ass at times 🤷
It’s not his job to put on a customer support demeanor while he says what he wants.
He gets to do as he likes. There are worse crimes than being an ass, such as being an ass to undeserving people perhaps. The configure script above is being an ass at the right people, even if it does editorialize (again, not a problem or crime, and really software could use attitudes!)
Especially in creative fields, you may choose to portray yourself any way you choose. You don’t owe anybody a pleasant attitude, unless of course you want to be pleasant to someone or everybody.
For some people, being pleasant takes a lot of work. I’m not paying those people, let alone to be pleasant, so why do I demand a specific attitude?
Being pleasant may take work, but being an asshole requires some effort too. Unless you are one to begin with and then it comes naturally of course. :D
How is the bc comment being an ass at the right people? Plenty of distros don’t ship with bc by default, you can just install it. What is a “standard part of unix” anyway?
Agree that CPU and disk (and maybe ram) haven’t improved enough to warrant a new laptop, but a 3200x1800 screen really is an amazing upgrade I don’t want to downgrade from.
I love my new 4k screen for text stuff.. Sadly on linux it seems to be pain in the ass to scale this appropriately and correctly. Even more with different resolutions between screens. So far windows does this quite well.
The text looks much crisper, so you can use smaller font sizes without straining your eyes if you want more screen real estate. Or you can just enjoy the increased readability.
Note: YMMV. Some people love it and report significantly reduced eye strain and increased legibility, some people don’t really notice a difference.
well you have a much sharper font and can go nearer if you want (like with books). I get eye strain over time from how pixelated text can appear at evening to me. Also you can watch higher res videos and all in all it looks really crisp. See also you smartphone, mine is already using a 2k screen, and you can see how clean text etc is.
You may want to just get an 2k screen (and maybe 144 FPS?) as that may already be enough for you. I just took the gamble and wanted to test it. Note that I probably got a modell with an inferior background lighting, so it’s not the same around the edges when I’m less than 50CM away. I also took the IPS panel for superior viewing angle as I’m using it for movie watching also. YMMV
My RTX 2070 GPU can’t play games like destiny on 4k 60 FPS without 100% GPU usage and FPS drops the moment I’m more than walking around. So I’ll definitely have to buy a new one if I want to use that.
I also just got a new 4k monitor, and that’s bothering me also. It’s only a matter of time before I fix the glitch with a second 4k monitor… Maybe after Christmas
I ended up doing that. It sucks, but Linux is just plain bad at HiDPI in a way Windows/macOS is not. I found a mixed DPI environment to be essentially impossible.
This is where I’m at too. I’m not sure I could go back to a 1024x768 screen or even a 1440x900 screen even. I have a 1900x1200 xps 13 that I really enjoy which is hooked up to a 3440x1440p ultrawide.
Might not need all the CPU power, but the screens are so so nice!
Oh I’d love to have a 4k laptop. I’m currently using a 12” Xiaomi laptop from 2017 with 4GB of RAM and a 2k display. After adding a Samsung 960 evo NVMe and increasing Linux swappiness this is more than enough for my needs - but a 4k display would just be terrific!
unbound treats the RPZ list or feed as if it were a real domain zone. So it will fetch it based on the TTL specified. Which is every 2 hours, based off looking at the top of the file.
Most of the non standard tools I use were already mentioned, but I recently discovered progress it’s useful for the times I forget to use pv or the files are bigger than I thought.
It’s not perfect, but it is the best alternative I used so far.
We use a GitOps model, our YAML (which is actually dhall) configuration is stored in a git repository, any changes in that repository trigger a pipeline that deploys the new configuration to our clusters (dev, staging, prod). It’s a bit more complicated than this, because there are helm charts involved and there is the CI/CD pipelines for each service running, but that is a summary of what happens.
This is not perfect at all, it has some problems, but sure beats running ansible on hundreds of EC2 machines and then managing monitoring, load balancing, and all that separately.
So now I am mostly curious what other people use instead of kubernetes, because as i said, it’s not perfect and I am always ready to try something better.
I want to set this up on my Raspberry Pi 4 that currently runs my DHCP via Pihole. Is there a good golang alternative for that?
Yes, blocky: https://github.com/0xERR0R/blocky
I used blocky for this and it works great. You could also run a container with pihole/adguard using podman.
Thank you so much for sharing!
I rediscovered fx yesterday (well, github recommended it to me) It’s nice, but I stick to jless for now
I also prefer jless, but your link is pointing to a page about berlin ;)
Whoops, here is the correct link https://jless.io/
Looks like the beginning of the end of the fantastic progress in tech that’s resulted from a relative lack of regulation.
Also, probably, a massive spike in grift jobs as people are hired to ensure compliance.
Looks like the beginning of the end for the unnecessarey e-waste provoked by companies forcing obselence and anti-consumer patterns made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m a EU citizen, and I see this argument made every single time the EU passes a new legislation affecting tech. So far, those worries never materialized.
I just can’t see why having removeable batteries would hinder innovation. Each company will still want to sell their prducts, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capeable of designing AirPods that have a removeable battery? The battery is even in the stem, so it could be as simple as having the stem be de-tacheable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken it is about service-time replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves special exception for batteries that’s even more manufacturer-friendly than other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirements set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic) so in practice that just means, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removeable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
That kind of cookie requires no popup though, only the ones used to shared info with third parties or collect unwarranted information.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for funcional purposes.
The law also specified that the banner must have a single button to “reject all cookies”, so any website that ask you to go trought a complex flow to reject your consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example “ choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “ sole purpose is to improve website function”. Definitely overreach.
we do know it and it’s a Li-Ion button cell https://guide-images.cdn.ifixit.com/igi/QG4Cd6cMiYVcMxiE.large
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
What is the alternative?
Those companies are clearly engaging in anti-consumer behavior, actively trying to stop right to repair and more.
The industry demonstrated to be incapable of self-regulating, so I think it’s about time to force their hand.
This law can be read in its entirety in a few minutes, it’s reasonable and to the point.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
(from https://www.folklore.org/StoryView.py?project=Macintosh&story=Diagnostic_Port.txt )
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reuptation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.
Did a standard electric plug also stiffle innovation? Or mandates about a car having to fit on a lane?
Laws are the most important safety lines we have, otherwise companies would just optimize for profit in malicious ways.
The reason is literally buckets and buckets of money from defense spending. You should already know this.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent up on taxpayer’s purse and not spawned a silicon valley of their own.
“Spent up”? At anything near the level of the USA??
Yeah.
https://en.m.wikipedia.org/wiki/History_of_computing_in_the_Soviet_Union
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
https://en.m.wikipedia.org/wiki/Economic_calculation_problem
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
But they did spawn a Silicon Valley of their own:
https://en.wikipedia.org/wiki/Zelenograd
The Wikipedia cites a number of factors:
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make a slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community, that many of their chats were were designed after, they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden or only allow other big corporate players to adapt/use their protocol with a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you or gives you RF burns of stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making out it to be.
I am curious however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but also express concern that regulation will quickly slippery slope away, but I am curious if you think the current system where there is more and more lock in both on the web and in devices can be pried back from those parties?
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There are a lot of economic incentives, and it was a new field of science application, that has benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again and associated anti-consumer decisions to the benefit of the bigger players. This situation is well-known – tragedy of the common situations in markets is never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations not to attempt to maximize their profits when they have an opportunity.
Well, I guess I am wrong then, but I prefer slower progress, slower computers, and generating less waste than just letting companies do all they want.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked down phones with impossible to swap components (pairing), that are supported only for 1 year, what’s people to do? If people have no idea how secure the authentication of someone is on GitHub, how can they make an informed decision about security?
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-ran marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen that has the same priorities I do not be able to buy a the device they want?
Then I have some very good news for you!
The law doesn’t prohibit waterproof devices. In fact, it makes clear expections for such cases. It mandates that the battery must be repleaceable without specialized tools and by any competent shop, it doesn’t mandate a user-replaceable battery.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice longer than the bill demands. NT 3.1 was supported for seven years, and Windows 95 was supported for 6. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ‘90s. Schools were gradually phasing them out. Things like the Commodore 64 of Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying in services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside, they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because they have locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice to upgrade.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1] and so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, one of the drafts required labelling the support lifetime (which research has shown will have a big impact - it has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
That is remarkably untrue. At least one entire school of economics proposes exactly that.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
“Market failure” just means “the market isn’t producing the prices I want”.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics have shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s a pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of market is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information to customer and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The eventually regulations banned goods below a certain efficiency rating but it was largely unnecessary because the market adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
I can see how such usurpation could distort my view.
Well… yeah.
I love this example. Plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion, they do so because they just can’t measure the criterion even if they cared. Even a Libertarian should admit that making good purchase decisions requires being well informed.
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public instances (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed results in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said of the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the most capitalistic something is (extreme example: nuclear power plants), the more difficult private enterprises will want to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest on. For instance, we need boxes and transmitters and whatnot to lighten up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject, I’m not informed enough to have an opinion on where the public/private frontier is best placed.
Good point, I’ll keep that in mind.
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure were possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away tax payers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second lowest bidder or they the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win), the latter ensures that you get paid as much as someone else thought they could deliver, reducing risk to the buyer. There are a lot of variations on this that are differently effective and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
It’s often a matter oh how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
That’s good to hear.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of KWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
Much like orthodox Marxism-Leninism, the Austrian School describes economics by how it should be, not how it actually is.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legilslation has had concrete goals that it sought to achieve. Most of them actually achieved them in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take them customers or third-parties to court (see e.g. Apple. For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt on how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there was heavy progress the last 10 years on smartphone hardware.
Since 2015 every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation, phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game–in which like PC hardware specs were propelled by gaming, so is the mobile market driven by games which I believe is now the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
But a lot are intensive & enthusiasts often prefer it. But still those time-waster types and e-sports tend to run on potatoes to grab the largest audience.
Anecdotally, I recently was reunited with my OnePlus 1 (2014) running Lineage OS, & it was choppy at just about everything (this was using the apps from when I last used it (2017) in airplane mode so not just contemporary bloat) especially loading map tiles on OSM. I tried Ubuntu Touch on it this year (2023) (listed as great support) & was still laggy enough that I’d prefer not to use it as it couldn’t handle maps well. But even if not performance bottle-necked, efficiency is certainly better (highly doubt it’d save more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android chips been fine for years & more than competitive once you factor in cost. I would doubt anyone is buying into either platform purely based on performance numbers anyhow versus ecosystem and/or wanting hardware options not offered by one or the other.
That’s what I’m saying — Qualcomm goes for large volumes of mid-range chips, and does not have products on the high end. They aren’t even trying.
BTW, I’m flabbergasted that Apple put M1 in iPads. What a waste of a powerful chip on baby software.
Uh, what about their series 8xx SoC’s? On paper they’re comparable to Apple’s A-series, it’s the software that usually is worse.
Still a massacre.
Yeah, true, I could have checked myself. Gap is even bigger right now than two years ago.
Q is in self-inflicted rut enabled by their CDMA stranglehold. Samsung is even further behind because their culture doesn’t let them execute.
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-5-single-Android-980x735.jpeg
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-Multi-Android-980x735.jpeg
Those are some cherry-picked comparisons. Apple release on a different cadence. You check right now, & S23 beats up on it as do most flagship now. If you blur the timing, it’s all about the same.
It would cost them more to develop and commission to fabrication of a more “appropriate” chip.
The high-end Qualcomm is fine. https://www.gsmarena.com/compare.php3?idPhone1=12082&idPhone3=11861&idPhone2=11521#diff- (may require viewing as a desktop site to see 3 columns)
With phones of the same tier released before & after you can see benchmarks are all close as is battery life. Features are wildly different tho since Android can offer a range of different hardware.
It doesn’t for laptops[1], so I doubt it would for smartphones either.
[1] https://www.lowtechmagazine.com/2020/12/how-and-why-i-stopped-buying-new-laptops.html
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, 12 have distinct visual styles, and between vendors this distinction can further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, used to), but consumers notice even if they can’t clearly relate what they’ve noticed.
I’m using LimeageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and is enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand-believers tell themselves when claiming that customers have a choice between a fixable phone and a glued-phone with an appstore. Of course customers will chose the glued-phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone, it means that they were given a Hobson’s choice
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
Okay, that was sloppy of me.
“Not wanted more than any of the other features on offer.”
“Not wanted enough to motivate serious investment in a competitor.”
That last is most telling.
This cynical view is unwarranted in the case of EU, which so far is doing pretty well avoiding regulatory capture.
EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements, and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least a two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc. Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets unpractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels but companies are free to put as much or as little as they wanted. Her research looked at this across a few consumer good classes and used the standard methodology where users were shown a small number of devices with different specs and different things on these labels and then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers but she found that users consistently were willing to select higher priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cylones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with a (relatively) weak suction provided by the cordless engines. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production, and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proude when they come up with a new design. Remember a 15 min mini-doc on their processes when they introduced unibody macbooks? Or 10 min video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and also waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.
Depending on how you define “progress” there can be a plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it a “progress” if you can replace one of the cameras on your iPhone Pro with, say, infrared camera? Definitely not a mainstream feature to ever come to mass-produced iPhone but maybe a useful feature for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance for it to ever come to a glued solid rectangle that rejects any part but the very specific it came with from the factory.
Phones have not made meaningful progress since the first few years of the iphone. Its about time
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
thank god
There’s a chance that tech companies start to make EU-only hardware.
This overall shift will favor long term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eats vegetables improves their health. This is necessary soil for future booms.
Anyone have invites? This is intriguing to me, I’m hopeful about the broader move away from VC funding, at least for things that aren’t really capital intensive.
Sure, I have 10. I’ll DM one to you. If anyone else wants one, please DM me here.
Update: I’ve given them all away.
I also have invites if pushcx runs out
I apparently have an account with invites too. DM me for one.
I also have invites if someone is interested
I also have invites.
…also, anyone we invite has invites.
I’m interested if anyone still has any left, thanks!
If you got one could you send one to me?
hey caleb, you may have already gotten this or moved on but in case you haven’t :) https://tildes.net/register?code=SQPWL-NO367-SAY33
thanks I will use it!
I’d be interested if anyone still has, thanks!
Can I have one?
Can you send me an invite if you got yours?
If you (or anyone else) have one remaining, I would be interested
Update, I have now distributed 10 invites to users here. Have fun!
There’s a limited-invite thing on their subreddit about every week or so, and you can also email invites@tildes.net (don’t know the success rate with that one).
I can attest that I have received an invite after a couple of days.
I had an email invite like 10 minutes after posting this, thanks ya’ll!
I’ve got 5 if anyone wants one
I am in the eu use joker.com and njalla and have no complaints.
I tried to use porkbun but for some reason I got flagged for fraud (normal credit card I use often, normal email, no VPN) they wanted a bunch of data including my passport to unlock my account so I just gave up. Actually at this point I gave up on us registrars and trying to find the cheapest and just decided I am willing to pay a bit more.
I am thankful for docker as well. But I am actually commenting here because you introduced me to Eyvind Earle’s paintings. The one you chose for that page is amazing, but he has others like “Green Hillside”. So good. Thank you for that.
Oh wow, yes, these are fantastic.
His style is reminiscent of Pierneef https://images.google.com/images?hl=en&source=hp&q=j.h.+pierneef
Really cool idea. Would be nice if there was a repository of recipes in this format and people could share recipes. I’ve been using http://based.cooking and it’s really nice to skip the life story of a recipe and all the ads and popups to subscribe.
Found this on their github org: https://github.com/cooklang/recipes
Well that’s going in my bookmarks, thanks for the link!
This looks neat, though I was wondering if I really have a use case for a recipe markdown language. Storing recipes from online so that:
a) I have a copy that won’t disappear and
b) I don’t have to scroll to the actual recipe three times while the ads load in
…might just be it.
That was a really easy to understand explanation of phantom types. thanks.
Anybody else thinks it’s a bit strange this is written in rust? also esbuild is written in go, I think. I mean.. should that not be kind of a red flag for javascript?
I mean Lua is written in C. CPython is written in C. Ruby… the list goes on.
I heavily embrace writing compilers and parsing tools in languages that can let you eke out performance and are probably better suited for the task!
Javascript has loads of advantages, but “every attribute access is a hash table lookup (until the JIT maybe fixes it for you)” seems like a silly cost to pay if you can avoid it for the thing that you really really really want to be fast.
Honestly I think the “write stuff in the language it is meant for” has cost so much dev time in the long term. It’s definitely valuable for prototyping and early versions, but for example templating language rendering not being written in tooling that tries to minimize allocations and other things means that everything is probably 3-5x slower than it could be. I get a bunch of hits on a blog post because people just search “pylint slow” in Google so much.
Meanwhile we have gotten many languages that are somewhat ergonomic but performant (Rust, to some extend Go or Java as well. Modern C++, kinda?). Obviously Pylint being Python means recruiting contributors is “easier” in theory, but in practice there’s a certain kind of person who works on these tools in general, and they’re probably not too uncomfortable with using many langauges IMO.
Could you elaborate why this is strange?
to me it’s a bit strange that javascript is such an old language but it’s either not so optimized/fast enough for people use it to build/compile javascript, or people don’t want to use it even though they are building a build tool for it?
Javascript has been going through a lot of optimizations, feature additions etc but it’s still a high level interpreted language. Compiling Javascript itself (or anything else) efficiently isn’t really among the primary goals of the language. I see no problem at opting for compiled languages optimized for performance for such a task. This does not diminish the language in my eyes because this is not what the language is designed for. Javascript may be terrible for a lot of reasons but this is not really a red flag for me.
See Why is esbuild fast? for some fundamental JavaScript flaws for this kind of workload.
Thanks for that, it’s interesting, though that was kind of my point.
Age of the language doesn’t make it fast or slow, neither make it more or less optimised. Rust and Go will be faster than JS mostly, because these do not need to have support for many of the JS features, that make it hard to compile to native code - runtime reflection on anything for example.
Producing code that executes quickly is just one of a large number of possible goals for a language. It is especially a goal for rust, but it’s not a priority for javascript.
This is perfectly normal. Languages that do prioritise speed of execution of the produced code make other trade-offs that would not be palatable to javascript use cases. The fact that someone is writing a bundler for javascript in another language is more a sign of the value that javascript brings, even to people familiar with other technologies.
JS is pretty fast when you stay on the optimal path, which is very tricky to do for anything bigger than a single hot loop. You usually need to depend on implementation details of JIT engines, and shape your code to their preference, which often results in convoluted code (e.g. hit a sweet spot of JIT’s inlining and code caching heuristics, avoid confusing escape analysis, don’t use objects that create GC pressure).
OTOH Rust is compiled ahead of time, and its language constructs are explicitly designed to be low-level and optimize in a predictable way. You don’t fight heuristics of GC and JIT optimization tiers. It’s easy to use stack instead of the heap, control inlining, indirections, dynamic dispatch. There’s tooling to inspect compiled code, so you can have a definitive answer whether some code is efficient or not.
Rust is a more fitting tool for high-performance parsing.
Is v8 being written in C++ a red flag too?
Or SpiderMonkey in C, C++, and Rust
Not really. Idiomatic JS code for this kind of workload will generate a lot of junk for the collector, and there’s also the matter of binding to various differing native APIs for file monitoring.
Where is
[PCRE]
coming from? My biggest problem with emacs regexes is getting the syntax right escaping a lot of chars.Had totally forgotten I had pcre-mode enabled, but it’s part of pcre2el. pcre-mode is marked experimental (advises a handful of functions). I’ve not yet run into issues, though I don’t typically need more than basic regexps. There are alternatives in there (ie. use pcre-query-replace-regexp) to avoid advices.
ah thanks, will try that. Maybe I cry a bit less time I have to use emacs regexes ;0
It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.
Sometimes I think that Mozilla has been infiltrated by Google people to sabotage it. I have no evidence for this, but observed events don’t contradict it either.
I agree, but the deck is stacked against Mozilla. They are a relatively small nonprofit largely funded by Google. Structurally, there is no way they can make a product that competes. The problem is simply that there is no institutional counterweight to big tech right now, and the only real solutions are political: antitrust, regulation, maybe creating a publicly-funded institution with a charter to steward the internet in the way Mozilla was supposed to. There’s no solution to the problem merely through better organizational decisions or product design.
I don’t really agree; there’s a lot of stuff they could be doing better, like not pushing out updates that change the colour scheme in such a way that it becomes nigh-impossible to see which tab is active. I don’t really care about “how it looks”, but this is just objectively bad. Maybe if you have some 16k super-HD IPS screen with perfect colour reproduction at full brightness in good office conditions it’s fine, but I just have a shitty ThinkPad screen and the sun in my home half the time (you know, like a normal person). It’s darn near invisible for me, and I have near-perfect eyesight (which not everyone has). I spent some time downgrading Firefox to 88 yesterday just for this – which it also doesn’t easily allow, not if you want to keep your profile anyway – because I couldn’t be arsed to muck about with userChrome.css hacks. Why can’t I just change themes? Or why isn’t there just a setting to change the colour?
There’s loads of other things; one small thing I like to do is not have a “x” on tabs to close it. I keep clicking it by accident because I have the motor skills of a 6 year old and it’s rather annoying to keep accidentally closing tabs. It used to be a setting, then it was about:config, then it was a userChrome.css hack, now it’s a userChrome.css hack that you need to explicitly enable in about:config for it to take effect, and in the future I probably need to sacrifice a goat to our Mozilla overlords if I want to change it.
I also keep accidentally bookmarking stuff. I press ^D to close terminal windows and sometimes Firefox is focused and oops, new bookmark for you! Want to configure keybinds for Firefox? Firefox say no; you’re not allowed, mere mortal end user; our keybinds are perfect and work for everyone, there must be something wrong with you if you don’t like it! It’s pretty darn hard to hack around this too – more time than I was willing to spend on it anyway – so I just accepted this annoyance as part of my life 🤷
“But metrics show only 1% of people use this!” Yeah, maybe; but 1% here and 5% there and 2% somewhere else and before you know it you’ve annoyed half (of not more) of your userbase with a bunch of stuff like that. It’s the difference between software that’s tolerable and software that’s a joy to use. Firefox is tolerable, but not a joy. I’m also fairly sure metrics are biased as especially many power users disable it, so while useful, blindly trusting it is probably not a good idea (I keep it enabled for this reason, to give some “power user” feedback too).
Hell, I’m not even a “power user” really; I have maybe 10 tabs open at the most, usually much less (3 right now) and most settings are just the defaults because I don’t really want to spend time mucking about with stuff. I just happen to be a programmer with an interest in UX who cares about a healthy web and knows none of this is hard, just a choice they made.
These are all really simple things; not rocket science. As I mentioned a few days ago, Firefox seems have fallen victim to a mistaken and fallacious mindset in their design.
Currently Firefox sits in a weird limbo that satisfies no one: “power users” (which are not necessarily programmers and the like, loads of people with other jobs interested in computers and/or use computers many hours every day) are annoyed with Firefox because they keep taking away capabilities, and “simple” users are annoyed because quite frankly, Chrome gives a better experience in many ways (this, I do agree, is not an easy problem to solve, but it does work “good enough” for most). And hey, even “simple” users occasionally want to do “difficult” things like change something that doesn’t work well for them.
So sure, while there are some difficult challenges Firefox faces in competing against Google, a lot of it is just simple every-day stuff where they just choose to make what I consider to be a very mediocre product with no real distinguishing features at best. Firefox has an opportunity to differentiate themselves from Chrome by saying “yeah, maybe it’s a bit slower – it’s hard and we’re working on that – but in the meanwhile here’s all this cool stuff you can do with Firefox that you can’t with Chrome!” I don’t think Firefox will ever truly “catch up” to Chrome, and that’s fine, but I do think they can capture and retain a healthy 15%-20% (if not more) with a vision that consists of more than “Chrome is popular, therefore, we need to copy Chrome” and “use us because we’re not Chrome!”
Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”. Someone filed a bug requesting this was changeable (not even default changed), that bug is now 20 years old.
It strikes me that this would be a great first issue for a new contributor, except the reason it’s been unfixed for so long is presumably that they don’t want it fixed.
A shortcut to quit isn’t a problem, losing user data when you quit is a problem. Safari has this behaviour too, and I quite often hit command-Q and accidentally quit Safari instead of the thing I thought I was quitting (since someone on the OS X 10.8 team decided that the big visual clues differentiating the active window and others was too ugly and removed it). It doesn’t bother me, because when I restart Safari I get back the same windows, in the same positions, with the same tabs, scrolled to the same position, with the same unsaved form data.
I haven’t used Firefox for a while, so I don’t know what happens with Firefox, but if it isn’t in the same position then that’s probably the big thing to fix, since it also impacts experience across any other kind of browser restart (OS reboots, crashes, security updates). If accidentally quitting the browser loses you 5-10 seconds of time, it’s not a problem. If it loses you a load of data then it’s really annoying.
Firefox does this when closing tabs (restoring closed tabs usually restores form content etc.) but not when closing the window.
The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab 🤷
Does changing
browser.tabs.closeWindowWithLastTab
in about:config fix that?I have it set to
false
already, I tested it to make sure and it doesn’t make a difference (^W won’t close the tab, as expected, but ^Q with one tab will still just quit).One of the first things I do when setting up a new macOS user for myself is adding alt-command-Q in Preferences → Keyboard → Shortcuts → App Shortcuts for “Quit Safari” in Safari. Saves my sanity every day.
Does this somehow remove the default ⌘Q binding?
Yes, it changes the binding on the OS level, so the shortcut hint in the menu bar is updated to show the change
It overrides it - Safari’s menu shows ⌥⌘Q against “Quit Safari”.
You can do this in windows for firefox (or any browser) too with an autohotkey script. You can set it up to catch and handle a keypress combination before it reaches any other application. This will be global of course and will disable and ctrl-q hotkey in all your applications, but if you want to get into detail and write a more complex script you can actually check which application has focus and only block the combination for the browser.
This sounds like something Chrome gets right - if I hit CMD + Q I get a prompt saying “Hold CMD+Q to Quit” which has prevented me from accidentally quitting lots of times. I assumed this was MacOS behaviour, but I just tested Safari and it quit immediately.
Disabling this shortcut with
browser.quitShortcut.disabled
works for me, but I agree that bug should be fixed.That was fixed a long time ago, at least on Linux. When I press it, a modal says “You are about to close 5 windows with 24 tabs. Tabs in non-private windows will be restored when you restart.” ESC cancels.
That’s strange. I’m using latest Firefox, from Firefox, on Linux, and I don’t ever get a prompt. Another reply suggested a config tweak to try.
I had that problem for a while but it went away. I have
browser.quitShortcut.disabled
as false in about:config. I’m not sure if it’s a default setting or not.It seems that this defaults to false. The fact you have it false, but don’t experience the problem, is counter-intuitive to me. Anyway the other poster’s suggestion was to flip this, so I’ll try that. Thanks!
That does seem backwards. Something else must be overriding it. I’m using Ubuntu 20.04, if that matters. I just found an online answer that mentions the setting.
On one level, I disagree – I have zero problems with Firefox. My only complaint is that sometimes website that are built to be Chrome-only don’t work sometimes, which isn’t really Firefox’s problem, but the ecosystem’s problem (see my comment above about antitrust, etc). But I will grant you that Firefox’s UX could be better, that there are ways the browser could be improved in general. However, I disagree here:
I don’t think this is possible given the amount of resources Firefox has. No matter how much they improve Firefox, there are two things that are beyond their control:
Even the best product managers and engineers could not reverse Firefox’s design. We need a political solution, unless we want the web to become Google Web (tm).
You can. The switcher is at the bottom of the Customize Toolbar… view.
Hm, last time I tried this it didn’t do much of anything other than change the colour of the toolbar to something else or a background picture; but maybe it’s improved now. I’ll have a look next time I try mucking about with 89 again; thanks!
You might try the Firefox Colors extension, too. It’s a pretty simple custom theme builder.
https://color.firefox.com/ to save the trouble of searching.
I agree with Firefox’s approach of choosing mainstream users over power-users - that’s the only way they’ll ever have 10% or more of users. Firefox is doing things with theming that I wish other systems would do - they have full “fresco” themes (images?) in their chrome! It looks awesome! I dream about entire DEs and app suites built from the ground up with the same theme of frescoes (but with an different specific fresco for each specific app, perhaps tailored to that app). Super cool!
I don’t like the lack of contrast on the current tab, but “give users the choice to fix this very specific issue or not” tends to be extremely shortsighted - the way to fix it is to fix it. Making it optional means yet another maintenance point on an already underfunded system, and doesn’t necessarily even fix the problem for most users!
More importantly, making ultra-specific optionss like that is usually pushing decisions onto the user as a method of avoiding internal politicking/arguments, and not because pushing to the user is the optimal solution for that specific design aspect.
As for the close button, I am like you. You can set
browser.tabs.tabClipWidth
to1000
. Dunno if it is scheduled to be removed.As for most of the other grips, adding options and features to cater for the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then a new feature needs to work with the option enabled and disabled. Removing options is just a way to keep the code lean.
My favorite example in the distribution world is Debian. Debian supports tries to be the universal OS. We are drowning with having to support everything. For examples, supporting many init systems is more work. People will get to you if there is a bug in the init system you don’t use. You spend time on this. At the end, people not liking systemd are still unhappy and switch to Devuan which supports less init systems. I respect Mozilla to keep a tight ship and maintaining only the features they can support.
Nobody would say anything if their strategy worked. The core issue is that their strategy obviously doesn’t work.
It ’s not even about that.
It’s removing things that worked and users liked by pretending that their preferences are invalid. (And every user belongs to some minority that likes a feature others may be unaware of.)
See the recent debacle of gradually blowing up UI sizes, while removing options to keep them as they were previously.
Somehow the saved cost to support some feature doesn’t seem to free up enough resources to build other things that entice users to stay.
All they do with their condescending arrogance on what their perfectly spherical idea of a standard Firefox user needs … is making people’s lives miserable.
They fired most of the people that worked on things I was excited about, and it seems all that’s left are some PR managers and completely out-of-touch UX “experts”.
It seems to me that having useful features is more important than having “lean code”, especially if this “lean code” is frustrating your users and making them leave.
I know it’s easy to shout stuff from the sidelines, and I’m also aware that there may be complexities I may not be aware of and that I’m mostly ignorant of the exact reasoning behind many decisions (most of us here are really, although I’ve seen a few Mozilla people around), but what I do know is that 1) Firefox as a product has been moving in a certain direction for years, 2) that Firefox has been losing users for years, 3) that I know few people who truly find Firefox an amazing browser that a joy to use, and that in light of that 4) keep doing the same thing you’ve been doing for years is probably not a good idea, and 5) that doing the same thing but doing it harder is probably an even worse idea.
I also don’t think that much of this stuff is all that much effort. I am not intimately familiar with the Firefox codebase, but how can a bunch of settings add an insurmountable maintenance burden? These are not “deep” things that reach in to the Gecko engine, just comparatively basic UI stuff. There are tons of projects with a much more complex UI and many more settings.
Hell, I’d argue that even removing the RSS was also a mistake – they should have improved it instead, especially after Google Reader’s demise there was a huge missed opportunity there – although it’s a maintenance burden trade-off I can understand it better, it also demonstrates a lack of vision to just say “oh, it’s old crufty code, not used by many (not a surprise, it sucked), so let’s just remove it, people can just install an add-on if they really want it”. This is also a contradiction with Firefox’s mantra of “most people use the defaults, and if it’s not used a lot we can just remove it”. Well, if that’s true then you can ship a browser with hardly any features at all, and since most people will use the defaults they will use a browser without any features.
Browsers like Brave and Vivaldi manage to do much of this; Vivaldi has an entire full-blown email client. I’d wager that a significant portion of the people leaving Firefox are actually switching to those browsers, not Chrome as such (but they don’t show up well in stats as they identify as “Chrome”). Mozilla nets $430 million/year; it’s not a true “giant” like Google or Apple, but it’s not small either. Vivaldi has just 55 employees (2021, 35 in 2017); granted, they do less than Mozilla, but it doesn’t require a huge team to do all of this.
And every company has limited resources; it’s not like the Chrome team is a bottomless pit of resources either. A number of people in this thread express the “big Google vs. small non-profit Mozilla”-sentiment here, but it doesn’t seem that clear-cut. I can’t readily find a size for the Chrome team on the ‘net, but I checked out the Chromium source code and let some scripts loose on that: there are ~460 Google people with non-trivial commits in 2020, although quite a bit seems to be for ChromeOS and not the browser part strictly speaking, so my guestimate is more 300 people. A large team? Absolutely. But Mozilla’s $430/million a year can match this with ~$1.5m/year per developer. My last company had ~70 devs on much less revenue (~€10m/year). Basically they have the money to spare to match the Chrome dev team person-for-person. Mozilla does more than just Firefox, but they can still afford to let a lot of devs loose on Gecko/Firefox (I didn’t count the number devs for it, as I got some other stuff I want to do this evening as well).
It’s all a matter of strategy; history is littered with large or even huge companies that went belly up just because they made products that didn’t fit people’s demands. I fear Firefox will be in the same category. Not today or tomorrow, but in five years? I’m not so sure Firefox will still be around to be honest. I hope I’m wrong.
As for your Debian comparison; an init system is a fundamental part of the system; it would be analogous to Firefox supporting different rendering or JS engines. It’s not even close to the same as “an UI to configure key mappings” or “a bunch of settings for stuff you can actually already kind-of do but with hacks that you need to explicitly search for and most users don’t know it exists”, or even a “built-in RSS reader that’s really good and a great replacement for Google Reader”.
I agree with most of what you said. Notably the removal of RSS support. I don’t work for Mozilla and I am not a contributor, so I really can’t answer any of your questions.
Another example of maintaining a feature would be Alsa support. It has been removed, this upsets some users, but for me, this is understandable as they don’t want to handle bug reports around this or the code to get in the way of some other features or refactors. Of course, I use Pulseaudio, so I am quite biased.
I think ALSA is a bad example; just use Pulseaudio. It’s long since been the standard, everyone uses it, and this really is an example of “147 people who insist on having an überminimal Linux on Reddit being angry”. It’s the kind of technical detail with no real user-visible changes that almost no one cares about. Lots of effort with basically zero or extremely minimal tangible benefits.
And ALSA is a not even a good or easy API to start with. I’m pretty sure that the “ALSA purists” never actually tried to write any ALSA code otherwise they wouldn’t be ALSA purists but ALSA haters, as I’m confident there is not a single person that has programmed with ALSA that is not an ALSA hater to some degree.
Pulseaudio was pretty buggy for a while, and its developer’s attitude surrounding some of this didn’t really help, because clearly if tons of people are having issues then all those people are just “doing it wrong” and is certainly not a reason to fix anything, right? There was a time that I had a keybind to
pkill pulseaudio && pulseaudio --start
because the damn thing just stopped working so often. The Grand Pulseaudio Rollout was messy, buggy, broke a lot of stuff, and absolutely could have been handled better. But all of that was over a decade ago, and it does actually provide value. Most bugs have been fixed years ago, Poettering hasn’t been significantly involved since 2012, yet … people still hold an irrational hatred towards it 🤷ALSA sucks, but PulseAudio is so much worse. It still doesn’t even actually work outside the bare basics. Firefox forced me to put PA on and since then, my mic randomly spews noise and sound between programs running as different user ids is just awful. (I temporarily had that working better though some config changes, then a PA update - hoping to fix the mic bug - broke this… and didn’t fix the mic bug…)
I don’t understand why any program would use the PA api instead of the alsa ones. All my alsa programs (including several I’ve made my own btw, I love it whenever some internet commentator insists I don’t exist) work equally as well as pulse programs on the PA system… but also work fine on systems where audio actually works well (aka alsa systems). Using the pulse api seems to be nothing but negatives.
Not sure if this will help you but I absolutely cannot STAND the default Firefox theme so I use this: https://github.com/ideaweb/firefox-safari-style
I stick with Firefox over Safari purely because it’s devtools are 100x better.
There’s also the fact that web browsers are simply too big to reimplement at this point. The best Mozilla can do (barely) is try to keep up with the Google-controlled Web Platform specs, and try to collude with Apple to keep the worst of the worst from being formally standardized (though Chrome will implement them anyway). Their ability to do even that was severely impacted by their layoffs last year. At some point, Apple is going to fold and rebase Safari on Chromium, because maintaining their own browser engine is too unprofitable.
At this point, we need to admit that the web belongs to Google, and use it only to render unto Google what is Google’s. Our own traffic should be on other protocols.
For a scrappy nonprofit they don’t seem to have any issues paying their executives millions of dollars.
I mean, I don’t disagree, but we’re still talking several orders of magnitude less compensation than Google’s execs.
A shit sandwich is a shit sandwich, no matter how low the shit content is.
(And no, no one is holding a gun to Mozilla’s head forcing them to hire in high-CoL/low-productivity places.)
Product design can’t fix any of these problems because nobody is paying for the product. The more successful it is, the more it costs Mozilla. The only way to pay the rent with free-product-volume is adtech, which means spam and spying.
Exactly why I think the problem requires a political solution.
I don’t agree this is a vague ethical reason. Problem with those are concerns like deforestation (and destruction of habitats for smaller animals) to ship almond milk across the globe, and sewing as an alternative to poverty and prostitution, etc.
The browser privacy question is very quantifiable and concrete, the source is in the code, making it a concrete ethical-or-such choice.
ISTR there even being a study or two where people were asked about willingness to being spied upon, people who had no idea their phones were doing what was asked about, and being disconcerted after the fact. That’s also a concrete way to raise awareness.
At the end of the day none of this may matter if people sign away their rights willingly in favor of a “better” search-result filter bubble.
I don’t think they’re vague (not the word I used) but rather abstract; maybe that’s no the best word either but what I mean with it is that it’s a “far from my bed show” as we would say in Dutch. Doing $something_better on these topics has zero or very few immediate tangible benefits, but rather more abstract long-term benefits. And in addition it’s also really hard to feel that you’re really making a difference as a single individual. I agree with you that these are important topics, it’s just that this type of argument is simply not all that effective at really making a meaningful impact. Perhaps it should be, but it’s not, and exactly because it’s important we need to be pragmatic about the best strategy.
And if you’re given the choice between “cheaper (or better) option X” vs. “more expensive (or inferior) option Y with abstract benefits but no immediate ones”, then I can’t really blame everyone for choosing X either. Life is short, lots of stuff that’s important, and can’t expect everyone to always go out of their way to “do the right thing”, if you can even figure out what the “right thing” is (which is not always easy or black/white).
My brain somehow auto-conflated the two, sorry!
I think we agree that the reasoning in these is inoptimal either way.
Personally I wish these articles weren’t so academic, and maybe not in somewhat niche media, but instead mainstream publications would run “Studies show people do not like to be spied upon yet they are - see the shocking results” clickbaity stuff.
At least it wouldn’t hurt for a change.
It probably wasn’t super-clear what exactly was intended with that in the first place so easy enough of a mistake to make 😅
As for articles, I’ve seen a bunch of them in mainstream Dutch newspapers in the last two years or so; so there is some amount of attention being given to this. But as I expended on in my other lengthier comment, I think the first step really ought to be making a better product. Not only is this by far the easiest to do and within our (the community’s) power to do, I strongly suspect it may actually be enough, or at least go a long way.
It’s like investing in public transport is better than shaming people for having a car, or affordable meat alternatives is a better alternative than shaming people for eating meat, etc.
I agree to an extent. Firefox would do well to focus on the user experience front.
I switched to Firefox way back in the day, not because of vague concerns about the Microsoft hegemony, or even concerns about web standards and how well each browser implemented them. I switched because they introduced the absolutely groundbreaking feature that is tabbed browsing, which gave a strictly better user experience.
I later switched to Chrome when it became obvious that it was beating Firefox in terms of performance, which is also a factor in user experience.
What about these days? Firefox has mostly caught up to Chrome on the performance point. But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.
These are the kinds of improvements that I’d like to see Firefox creating, in order to lure people back. You can’t guilt me into trying a new browser, you have to tempt me.
Opera had this over ten years ago (“tab stacking”, added in Opera 11 in 2010). Pretty useful indeed, even with just a limited number of tabs. It even worked better than Chrome groups IMO. Firefox almost-kind-of has this with container tabs, which are a nice feature actually (even though I don’t use it myself), and with a few UX enhancements on that you’ve got tab groups/stacking.
Opera also introduced tabbed browsing by the way (in 2000 with Opera 4, about two years before Mozilla added it in Phoenix, which later became Firefox). Opera was consistently way ahead of the curve on a lot of things. A big reason it never took off was because for a long time you had to pay for it (until 2005), and after that it suffered from “oh, I don’t want to pay for it”-reputation for years. It also suffered from sites not working; this often (not always) wasn’t even Opera’s fault as frequently this was just a stupid pointless “check” on the website’s part, but those were popular in those days to tell people to not use IE6 and many of them were poor and would either outright block Opera or display a scary message. And being a closed-source proprietary product also meant it never got the love from the FS/OSS crowd and the inertia that gives (not necessarily a huge inertia, but still).
So Firefox took the world by storm in the IE6 days because it was free and clearly much better than IE6, and when Opera finally made it free years later it was too late to catch up. I suppose the lesson here is that “a good product” isn’t everything or a guarantee for success, otherwise we’d all be using Opera (Presto) now, but it certainly makes it a hell of a lot easier to achieve success.
Opera had a lot of great stuff. I miss Opera 😢 Vivaldi is close (and built by former Opera devs) but for some reason it’s always pretty slow on my system.
This is fair and I did remember Opera being ahead of the curve on some things. I don’t remember why I didn’t use it, but it being paid is probably why.
I agree, I loved the Presto-era Opera and I still use the Blink version as my main browser (and Opera Mobile on Android). It’s still much better than Chrome UX-wise.
I haven’t used tab groups, but it looks pretty similar to Firefox Containers which was introduced ~4 years ahead of that blog post. I’ll grant that the Chrome version is built-in and looks much more polished and general purpose than the container extension, so the example is still valid.
I just wanted to bring this up because I see many accusations of Firefox copying Chrome, but I never see the reverse being called out. I think that’s partly because Chrome has the resources to take Mozilla’s ideas and beat them to market on it.
Disclaimer: I’m a Mozilla employee
One challenge for people making this kind of argument is that predictions of online-privacy doom and danger often don’t match people’s lived experiences. I’ve been using Google’s sites and products for over 20 years and have yet to observe any real harm coming to me as a result of Google tracking me. I think my experience is typical: it is an occasional minor annoyance to see repetitive ads for something I just bought, and… that’s about the extent of it.
A lot of privacy advocacy seems to assume that readers/listeners believe it’s an inherently harmful thing for a company to have information about them in a database somewhere. I believe privacy advocates generally believe that, but if they want people to listen to arguments that use that assumption as a starting point, they need to do a much better job offering non-circular arguments about why it’s bad.
I think it has been a mistake to focus on loss of privacy as the primary data collection harm. To me the bigger issue is that it gives data collectors power over the creators of the data and society as a whole, and drives destabilizing trends like political polarization and economic inequality. In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.
There is no brainwashing needed for people to act like people.
do you disagree with something in my comment?
I’m not @halfmanhalfdonut but I don’t think that brainwashing is needed to get humans to behave like this. This is just how humans behave.
Yep, this is what I was saying.
things like individualism, solidarity, and collaboration exist on a spectrum, and everybody exhibits each to some degree. so saying humans just are individualistic is tautological, meaningless. everyone has some individualism in them regardless of their upbringing, and that doesn’t contradict anything in my original comment. that’s why I asked if there was some disagreement.
to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.
@GrayGnome
I think we’re going to have to agree to disagree. I can make a complicated rebuttal here, but it’s off-topic for the site, so cheers!
cheers
I think you’re only seeing the negative side (to you) of modern mass media and culture. Our media and culture also promote unity, tolerance, respect, acceptance, etc. You’re ignoring that so that you can complain about Google influencing media, but the reality is that the way you are comes from those same systems of conditioning.
The fact that you even know anything about income inequality and political polarization are entirely FROM the media. People on the whole are not as politically divided as media has you believe.
sure, I only mentioned this particular negative aspect because it was relevant to the point I was making in my original comment
I agree with everything you’ve written in this thread, especially when it comes to the abstractness of pro-Firefox arguments as of late. Judging from the votes it seems I am not alone. It is sad to see Mozilla lose the favor of what used to be its biggest proponents, the “power” users. I truly believe they are digging their own grave – faster and faster it seems, too. It’s unbelievable how little they seem to be able to just back down and admit they were wrong about an idea, if only for a single time.
Firefox does have many features that Chrome doesn’t have: container tabs, tree style tabs, better privacy and ad-blocking capabilities, some useful dev tools that I don’t think Chrome has (multi-line JS and CSS editors, fonts), isolated profiles, better control over the home screen, reader mode, userChrome.css, etc.
Is anyone here using nim for something serious in production? It looks really nice and I am surprised is not more popular.
How about the Nim Forum? The source code is at nim-lang/nim-forum on github.
This really shows off the capabilities of the language: both the backend and frontend are written in Nim. The frontend is Nim compiled down to a JavaScript SPA.
I’m the kind of person who is sensitive to latency; I dislike most JS-heavy browser-based things. The Nim Forum is as responsive as a JS-free / minimal JS site.
Not sure what counts as “serious in production”, but I’ve been running a kernel-syslogd on four or so machines ever since I wrote kslog. I also have several dozen command-line utilities including replacements for ls and procps as well as a unified diff highlighter. The Status IM Ethereum project has also been investing heavily in Nim.
I’ve been working on a key-value data store; it’s a wrapper around the C library libmdbx (an extension of LMDB), but with an idiomatic Nim API, and I’m working on higher level features like indexes.
I can not use FISH shell as it does not support such basic POSIX for loops as these:
Its pointless to learn another new FISH syntax just for this …
If it would support the I could try it and maybe switch from ZSH … but once you setup your ZSH shell there is no point in using any other shell then ZSH …
The syntax change is minimal:
Like mentioned in the sibling comment, any ad-hoc pipeline that gets involved should probably be a POSIX script for portability and reusability.
The time commitment and fragility of a comparable ZSH setup is why I switched to Fish. Compare my .zshrc at 239 lines and my config.fish at 52 lines.
But what’s the advantage of the different syntax? Most people considering Fish will already be familiar with POSIX for loops. and will still be writing POSIX for loops for both shell scripts and for interactive shells on other systems. Is the extra “do” so annoying that it’s worth the extra overhead of constantly switching between the different shell for loop syntaxes?
This is literally the main reason why I’m not using fish. I appreciate the good out-of-the-box configuration. I’m painfully familiar with ZSH’s fragility; it even made me switch back to bash. I would love a good, modern, pretty, nice-out-of-the-box shell. I just don’t want to use a non-POSIX shell. When I’m just writing pipelines and for loops interactively, POSIX shell’s issues aren’t really relevant When I’m writing anything complex enough for POSIX shell’s issues to be relevant, it’s in a shell script in a file, and I don’t want all my shell scripts to use some weird non-standard syntax which will preclude me from switching to a different shell in the future. So fish’s “improved” syntax is a disadvantage for interactive use, and isn’t something I would use for non-interactive use anyways.
Also, the official documentation tells you to run
chsh -s <path to fish>
to switch to Fish. Well, large parts of UNIX systems expect $SHELL to be POSIX-compatible. If you follow the official documentation your Sway configuration will break, all Makefiles will break, your sxhkd config will break, and lots of other programs will break. If it’s going to recommend switching to Fish withchsh
, it really should be POSIX compatible.IMO, fish is an amazing shell made less relevant through insistence on having a syntax which doesn’t even remotely resemble POSIX shell syntax.
This is simply not true. I use fish as my default shell and never had a problem with makefiles or my window manager, I use i3, but don’t know why sway would be different. I never encountered software that just runs scripts like that, they either have a #! line that specifies the shell or just call
bash -c
/sh -c
directly or whatever.IMO lack of Posix compliance in fish is a non issue, specially for experienced users which will know how to fallback to bash or write scripts and use
#!
. I used zsh for a long time and I’d use bash for scripts I could share with my team and everything just works. I feel like I could have switched to fish a lot sooner if I just tried instead of being put off by comments like these. If you are curious, just try it, maybe it’s for you, maybe it’s not, but don’t rely on other people’s opinion.For me the biggest advantage of fish is I can easily understand my config. With zsh I had a bunch of plugins and configs to customize it and I understand almost none of it. Every time I wanted to change it I would lose a lot of time. Documentation was also a pain, searching for obscure features you had to read random forums and advice and try different things. fish has a great manual, and great man pages, everything is just easy to learn and lookup. I value that more than I value POSIX compliance, maybe you don’t, but form your own opinion.
I’m happy that you haven’t experienced issues. I know 100% for a fact that having a non-POSIX $SHELL was causing a lot of issues for me last time I tried using fish. Maybe you’ve been lucky, or maybe they have a workaround now.
That’s fine. I agree that the things you mention are advantages of fish. I was wondering what the advantage of a different syntax is. Like, in which ways would fish but with a POSIX syntax be worse than the current implementation of fish? In which situations is it an advantage to have a POSIX-incompatible syntax?
There is the right and the wrong way to switch to fish: The right way is to set the login shell for a user (i.e. replace /bin/bash with /usr/bin/fish in /etc/passwd, either manually or with chsh).
The wrong way is to point /bin/sh at /usr/bin/fish: That, and only that symlink, is what matters to everything that implicitly invokes “the shell” (i.e. /bin/sh) without a hashbang, such as Makefiles. I’m not surprised at the carnage you described if you did this.
I, too, used
fish
for a while and did observe breakage, and I for sure did not do anything as silly as that. I remember in particular this bug: https://github.com/fish-shell/fish-shell/issues/2292 After that I changed my shell tobash
and didexec fish
in.bashrc
. This, IIRC, did fix most of the bugs, though I still had to be careful: some scripts don’t actually use a shebang and expect the shell to notice that the executable is a shell script.For example:
Then, from bash:
And from fish:
That I don’t know. It’s possible that fish would be better if it was POSIX compatible, I was just saying that even though it is not POSIX compatible, it’s still worth using. I think fish syntax is better for interactive use than bash/zsh, but that is just my opinion. For script use I use bash anyway. One exception is when writing custom completions for my custom commands and then I am oh so grateful I am not using bash/zsh and not using that syntax.
Could not agree more.
If one day FISH shell will also accept POSIX syntax for the
while
andfor
loops then I can look into it.I have tons of scripts but I also use these
for
andwhile
POSIX loops all the time … and putting them in scripts is pointless because everytime its for different purpose or for different files or commands.Besides I have made a syntax error and I can not edit my comment now :)
If should be either like that:
… or like that:
I really do use these POSIX for and while loops interactively all the time, not sure that this serves as a proof but:
Thanks for sharing the ZSH config. Mines is at about 230 lines which 1/4 is for all shells variables like PATH or less(1) settings and 3/4 for ZSH itself.
Note that fish is focused on being an interactive shell, so, if you primary metric is how easy it is to write a for loop, you are looking at the tool of a wrong class.
I personally use fish to type single-line commands in with out-of-the-box autosuggestions. If a need a for loop, I launch Julia.
EDIT: sorry, I’ve misread your comment. The general point stand, but for a different reason: POSIX compat is a non-goal of fish.
I write loops interactively in shell all the time. I wouldn’t consider a language suitable for interactive use as a shell if it lacked loops (or some other iteration mechanism).
How would you do what vermaden did up there with Julia? Seems to me like a huge overkill, but then again if you’re sufficiently proficient with Julia, it might make sense.
The absence of globbing out of the box is indeed a pain.
On the other hand, I don’t need to worry about handling spaces in param substitution.
EDIT: to clarify, I din’t claim that Julia is better than zsh for scripting, it’s just the tool that I use personally. Although for me Julia indeed replaced both shell and Python scripts.
This would also work in Julia, is pretty short and shows how you can use patterns to find files (there’s Glob.jl, too, but I miss the
**
glob from zsh):You might not know that
endswith
,contains
and many other predicates in Julia have curried forms, so you could have written:Thanks, I didn’t know about that!
Don’t you need
\.
in regex though?You’re welcome! And haha, yes, I should have escaped the dot :)
IMHO that is a lot of pointless typing instead of just respecting the standards - like POSIX one.
I agree that’s more typing! But pointlessness is in the fingers of typer, so to say.
I personally don’t know bash — it’s too quirky for me to learn it naturally. Moreover, for me there’s little value in POSIX: I am developing desktop software, so I care about windows as well. For this reason, I try not to invest into nix-only tech.
On the other hand, with Julia I get a well-designed programming language, which lets me to get the stuff done without knowing huge amount of trivia a la set -e or behavior of ${VAR} with spaces.
There’s also an irrational personal disagreement with stuff that’s quirky/poorly designed. If I were in a situation where I really needed a low-keystroke shell scripting language, I am afraid I’d go for picking some un-POSIX lisp, or writing my own lang.
To be honest, I find the snippet vermaden posted to already have lots of unessessairy typing. There’s no reason to use a loop if all you want is map commands to line. Xargs exists for this exact reason.
But more to the point, what’s stopping you from calling Bourne shell whenever you need? Fish clearly states in its manual that it is not intended to be yet another scripting language, which in my opinion is an important and useful divide. I still write shellscripts all the time and have been a fish user for 10 years.
Now you are maintaining TWO configurations for interactive shells.
My shell is where I command my computer. If I need, or am encouraged, to leave my shell to command my computer, my shell has failed.
Can’t speak for julia, but here it is in raku:
Which is almost as concise as shell, and maintains more structure in its output.
Is this flame bait? If you get over the syntax hurdle, there are many reasons why
for
loops with globs, especially interactively, are better written in fish:Try typing this in your terminal, with newlines:
See? Who needs one-liners when you can have multiple in fish? This way, it stays easy to edit, read and navigate (2-dimensionally with the arrow keys) as you pile onto it.
In case your
*.log
expansion doesn’t match any file – compare this with any POSIX shell!As a command argument…
…and in a for loop:
As a command argument, fish prints an error, and doesn’t even run the command if the glob failed. In a for loop, the loop just iterates 0 times, with no error printed. In POSIX shelll, you can get either of these behaviours (by setting failglob or nullglob), but not both, which is a dilemma.
Recursive globs are on by default (not hidden behind the globstar flag) – who needs
find
anymore?Its not a troll attempt.
Its not my intention to force anyone to use ZSH or to discourage anyone from using FISH … besides I have made a syntax error and I can not edit my comment now :)
If should be either like that:
… or like that:
I really do use these POSIX for and while loops interactively all the time, not sure that this serves as a proof but:
I don’t see how this is different from standard shell:
This is how I usually write loops in scripts.
It isn’t different. The command line behaviour is.
Can you type such a multi-liner ↑ at the command line, edit it all at once (not just one line at a time, with no turning back, as in Bash), and retrieve it from history in its full glory?
I used zsh for a long time before switching to fish, and I basically found the interactive mode for fish to be a lot nicer—akin to zsh with oh-my-zsh, but even nicer, and faster. I absolutely switched for the interactive experience; most of my scripts are still written in bash for portability to my teammates.
The scripting changes make for a language that is a lot more internally consistent, so I have found that for my ad-hoc loops and stuff I do less googling to get it to work than I do using bash and zsh. Learning another shell syntax as idiosyncratic as bash would be very frustrating. At least with fish, it’s a very straightforward language.
If you are thinking about trying another shell, oil might be your jam, since it aims to be similar to POSIX syntax, but without as much ambiguity.
https://www.phoronix.com/
For lists I use orgzly with webdav sync. I have nginx serving a dav directory on a remote server I manage. To edit on my laptop I mount that dir with davfs and edit the notes with emacs org-mode.
For URLs, snippets of text, etc I want to transfer from my laptop clipboard to my phone clipboard I use:
xsel -o | qrencode -t ANSIUTF8
and then just scan that code with an app on my phone.For other random stuff I just use signal note to self.
Can anyone suggest a xscreensaver alternative that doesn’t pull a bunch of dependencies?
I mean, is this reasonable for everyone?
I use
i3lock
. Its direct dependencies look reasonable, although I don’t know what they recursively expand to.With that said, I don’t know whether it is “secure” or not because my threat model doesn’t really care if it is or not. I only use it to prevent cats and children from messing around on the keyboard. And for that, it works well.
Try slock, which has no dependencies except X11 itself.
Build from source and disable the savers/hacks that require the dependencies you aren’t happy about.
I don’t want any screensaver, just want my screen to lock reliably. I guess I’ll try that.
Try https://leahneukirchen.org/blog/archive/2020/01/x11-screen-locking-a-secure-and-modular-approach.html
It’s a great compromise when using X11, but the whole concept of screen savers on X11 is just so fragile. Actually suspending the session even if the screensaver should crash would be much cleaner (which is how every other platform, and also wayland handle it).
What I’m even more surprised about is that you said this compromise is possible with 25yo tech - why did no distro actually do any of this before?
What about physlock?
No idea about physlock or any other alternative, I am asking because this sentence kind of make me think:
Though this person’s attitude kind of bothers me, if you run
./configure
on xscreensaver you read stuff like:hm. Ok? I guess I don’t have to like it, I just don’t see the need for that.
jwz ragequit the software industry some 20 years ago and has been trolling the industry ever since. Just some context. He’s pretty funny but can be a bit of an ass at times 🤷
He’s also pretty reliably 100% correct about software. This may or may not correlate with the ragequitting.
While ragequitting may not correlate with being correct about software, being correct about software is absolutely no excuse for being an ass.
It’s not his job to put on a customer support demeanor while he says what he wants.
He gets to do as he likes. There are worse crimes than being an ass, such as being an ass to undeserving people perhaps. The configure script above is being an ass at the right people, even if it does editorialize (again, not a problem or crime, and really software could use attitudes!)
Lots of people in our industry seem to think that being a good developer you can behave like a 5 years old. That’s sad.
Especially in creative fields, you may choose to portray yourself any way you choose. You don’t owe anybody a pleasant attitude, unless of course you want to be pleasant to someone or everybody.
For some people, being pleasant takes a lot of work. I’m not paying those people, let alone to be pleasant, so why do I demand a specific attitude?
Being pleasant may take work, but being an asshole requires some effort too. Unless you are one to begin with and then it comes naturally of course. :D
How is the bc comment being an ass at the right people? Plenty of distros don’t ship with bc by default, you can just install it. What is a “standard part of unix” anyway?
bc is part of POSIX. Those distros are being POSIX-incompatible.
As a developer for Unix(-like) systems, you should be able to rely on POSIX tools (sh, awk, bc etc.) being installed.
It sounds like you view software as an occupation. It is not. It’s a product.
Physlock runs as root and locks the screen at the console level. AFAIK the problems affecting x-server screenlockers aren’t relevant to physlock.
Agree that CPU and disk (and maybe ram) haven’t improved enough to warrant a new laptop, but a 3200x1800 screen really is an amazing upgrade I don’t want to downgrade from.
I love my new 4k screen for text stuff.. Sadly on linux it seems to be pain in the ass to scale this appropriately and correctly. Even more with different resolutions between screens. So far windows does this quite well.
Wayland can handle it ok, but Xorg doesn’t (and never will) have support for per-display DPI scaling.
I don’t see myself being able to afford a 4k screen for a few years but if you just scale everything up, what’s the advantage?
The text looks much crisper, so you can use smaller font sizes without straining your eyes if you want more screen real estate. Or you can just enjoy the increased readability.
Note: YMMV. Some people love it and report significantly reduced eye strain and increased legibility, some people don’t really notice a difference.
I use a much nicer font on my terminals now, which I find clearer to read. And I stare at terminals, dunno, 50% of my days.
This is a Tuxedo laptop (I think it’s the same whitelabel as system86 sells) which don’t feel expensive to me.
Which tuxedo laptop has 4k?
I can’t find them anymore either. They used to have an option for the high res display. I go this one a bit over a year ago:
how was your driver experience ? I’ve had to re-send mine twice due to problems with the CPU/GPU hybrid stack. Though mine is now 3? years old.
Drivers are fine, it all simply works. Battery could last longer.
Yeah ok. I just ordered a Pulse 15. Also wanted a 4k display but didn’t see it anywhere. thanks
hah I’m also using a tuxedo one, but the font is far too tiny on that screen to work with everyday
well you have a much sharper font and can go nearer if you want (like with books). I get eye strain over time from how pixelated text can appear at evening to me. Also you can watch higher res videos and all in all it looks really crisp. See also you smartphone, mine is already using a 2k screen, and you can see how clean text etc is.
You may want to just get an 2k screen (and maybe 144 FPS?) as that may already be enough for you. I just took the gamble and wanted to test it. Note that I probably got a modell with an inferior background lighting, so it’s not the same around the edges when I’m less than 50CM away. I also took the IPS panel for superior viewing angle as I’m using it for movie watching also. YMMV
My RTX 2070 GPU can’t play games like destiny on 4k 60 FPS without 100% GPU usage and FPS drops the moment I’m more than walking around. So I’ll definitely have to buy a new one if I want to use that.
I also just got a new 4k monitor, and that’s bothering me also. It’s only a matter of time before I fix the glitch with a second 4k monitor… Maybe after Christmas
I ended up doing that. It sucks, but Linux is just plain bad at HiDPI in a way Windows/macOS is not. I found a mixed DPI environment to be essentially impossible.
This is where I’m at too. I’m not sure I could go back to a 1024x768 screen or even a 1440x900 screen even. I have a 1900x1200 xps 13 that I really enjoy which is hooked up to a 3440x1440p ultrawide.
Might not need all the CPU power, but the screens are so so nice!
And the speakers.
I love my x230, but I just bought an M1 Macbook Air, and god damn, are those speakers loud and crisp!
For me it’s also screen size and brightness that are important. I just can’t read the text on a small, dim screen.
Oh I’d love to have a 4k laptop. I’m currently using a 12” Xiaomi laptop from 2017 with 4GB of RAM and a 2k display. After adding a Samsung 960 evo NVMe and increasing Linux swappiness this is more than enough for my needs - but a 4k display would just be terrific!
Nice, I’ll try it, I didn’t know about unbound+rpz. How is the energized list working for you?
Really well.
add
rpz-log: yes
to eachrpz:
section to be able to keep count of how often the rpz action occur.Sorry, n00b to RPZ: how often does unbound refresh the list from github?
unbound treats the RPZ list or feed as if it were a real domain zone. So it will fetch it based on the TTL specified. Which is every 2 hours, based off looking at the top of the file.
Most of the non standard tools I use were already mentioned, but I recently discovered progress it’s useful for the times I forget to use pv or the files are bigger than I thought.
It’s not perfect, but it is the best alternative I used so far.
We use a GitOps model, our YAML (which is actually dhall) configuration is stored in a git repository, any changes in that repository trigger a pipeline that deploys the new configuration to our clusters (dev, staging, prod). It’s a bit more complicated than this, because there are helm charts involved and there is the CI/CD pipelines for each service running, but that is a summary of what happens.
This is not perfect at all, it has some problems, but sure beats running ansible on hundreds of EC2 machines and then managing monitoring, load balancing, and all that separately.
So now I am mostly curious what other people use instead of kubernetes, because as i said, it’s not perfect and I am always ready to try something better.