This is a very good candidate for AI services because we have a consistent (if not totally logical) structure to the document and we know what the output should look like.
This is exactly why it’s a bad candidate for an AI service. You need OCR and a simple script to transform the table. You almost certainly want to avoid LLMs when you have a simple deterministic solution at hand.
it’s a PDF with structure and text, so it doesn’t even need OCR. Of the fairly standard tools, pdftohtml sadly doesn’t manage to preserve the table structure, but pdftotext does a reasonably good job.
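For what it’s worth, poppler’s pdftotext can be told to preserve the physical layout, which is what keeps table columns aligned (file names here are placeholders):

```console
$ pdftotext -layout statement.pdf statement.txt
```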
HomeAssistant is probably the biggest offender here, because I run it in Docker on a machine with several other applications. It actively resists this, popping up an “unsupported software detected” maintenance notification after every update. Can you imagine if Postfix whined in its logs if it detected that it had neighbors?
The author is assuming here that HomeAssistant is detecting the presence of other things running, but that’s one thing that containers prevent, unless you’ve explicitly punched holes between them. It sounds like an obnoxious notification, but also that the author doesn’t really understand why it’s happening.
Recently I decided to give NextCloud a try. This was long enough ago that the details elude me, but I think I burned around two hours trying to get the all-in-one Docker image to work in my environment. Finally I decided to give up and install it manually, to discover it was a plain old PHP application of the type I was regularly setting up in 2007. Is this a problem with kids these days? Do they not know how to fill in the config.php?
Was it recent or long enough ago? What was the actual problem? Nextcloud is being used as evidence of… something here. But what? And what’s wrong with putting a “plain old PHP application” in a container? They don’t mandate you use a container; you have the choice.
I like keeping PHP stuff isolated from my OS, and being able to upgrade apps and the PHP versions for those apps independently. On my personal VPS roadmap is moving a MediaWiki into a container, precisely so I can decouple OS upgrades from both PHP and MediaWiki (it’s currently installed from the Debian package).
OP installed HomeAssistant as a regular piece of software outside of Docker and was surprised that it doesn’t like sharing the machine. That seems to indicate that HA is either very greedy or demands a container as its primary deployment method. And I agree with OP that either is a kinda unconventional installation strategy.
Other installation methods are meant for “experts”. I spent some time looking at it and decided it was too much trouble for me. I don’t really understand why they want that, either. If I wanted to understand, it looks like the right way to go about it would be to stand it up on their distribution and examine it very carefully. The reasoning was not clearly documented last time I looked.
I suspect that HA is really fragile, and makes many assumptions about its environment that causes it to fall over when even the tiniest thing is wrong. I suspect this because that’s been my experience even with HAOS.
Home Assistant is actually very robust, I ran it out of a “pip install home-assistant” venv for a few years and it was very healthy, before I moved it out to an appliance so the wall switches would still work whenever the server needed rebooting. Each time I upgraded the main package, it would go through and update any other dependencies needed for its integrations, with the occasional bump in Python version requiring a quick venv rebuild (all the config and data is separate).
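That setup is only a few commands; a sketch from memory (package name, paths, and flags as I recall them, so double-check against the docs):

```console
$ python3 -m venv ~/hass-venv
$ ~/hass-venv/bin/pip install homeassistant
$ ~/hass-venv/bin/hass --config ~/hass-config
```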
Home Assistant wants to be on its own HassOS because of its user-friendly container image updates and its add-on ecosystem, which enables companion services like the MQTT broker or Z-Wave and Zigbee adapters.
The second reason why the singularity is dumb (and one that ought to occur to a philosopher) is that the idea of a computer having “intelligence” and therefore being able to build a better computer to succeed it is absurd. Even if there were an AI with a high IQ, building a chip is a matter of empirical scientific engineering, not a priori speculation. Without doing experiments, you can’t hope to make better chips. A computer in a box could puzz its puzzler all day long without coming up with any breakthroughs in fundamental science, and that’s what we need if the state of the art is to advance.
The point might be scaled back a bit, so that the claim is not that an AI will be able to build a better chip using a priori reasoning, but that the computer will be better able to lay out the circuit components than were produced using conventional empirical research. Which would be a good point, if we hadn’t already started doing that back in the 1970s. Ever since Intel’s second chip series, they’ve been designing their chips in CAD, since using a blueprint to lay out all the little bits was too bulky. In other words, computers are already doing what singularity junkies hope can someday be done. So, far from being prophets of the future, singularity enthusiasts are blinded to the past!
My experience using LLMs so far is that they are slightly positive on productivity, but only slightly, because they also tend to send you down a lot of blind alleys. I’m sure it would be better for me if I paid for ChatGPT 4, or if I had a different personality type that didn’t mind not understanding things from first principles as much. It’s neat that all this LLM stuff is happening, and maybe at some point they will figure out how to glue it together with an expert system to make something really intelligent, but even that won’t solve the a priori vs. a posteriori problem mentioned in the talk.
Let’s assume we somehow come up with “an AI with high IQ” agent. Why do you think it would need to experiment? We do a lot of science in simulations already. Why wouldn’t an AI be able to run simulations? We have bazillibytes of real experimental data for all branches of science. Don’t you think an AI would be able to incorporate it? We do it all the time: we find all sorts of new applications for data that was collected for completely unrelated studies.

You also point out that an AI wouldn’t be able to improve chips much because we already do all our chip design in CAD. But we still use modular designs that are not as optimised as integrated ones. We also use discrete schematics even though we’ve been in the realm of quantum effects for quite a while now. A couple of years ago a team used some ML to optimise circuits, and it came up with something that shouldn’t have worked as a discrete schematic but IIRC exploited some induction effects and worked OK. There’s plenty of space for novel approaches that can yield better performance.

An AI could also come up with a different solution for scaling its capabilities. What if, instead of manufacturing new chips, it went for a distributed model and hacked all the shitty IoT devices out there?
Without doing experiments, you can’t hope to make better chips. A computer in a box could puzz its puzzler all day long without coming up with any breakthroughs in fundamental science, and that’s what we need if the state of the art is to advance.
Is there any reason to think that future AI implementations won’t be able to make experiments?
Or, what’s more, is there any doubt that as soon as we have intelligent AI that is also capable of competently controlling a robot body, it will immediately be put into thousands of them?
I’m sure it would be better for me if I paid for ChatGPT 4
Indeed, the difference between ChatGPT 3.5 and 4 is quite significant. I find 3.5 almost useless compared to 4.
To put things into perspective: at one point most people thought that a computer program would never be able to win a game of chess against the best human player. Later, when this belief was falsified, it shifted to the game of Go, which has now been falsified as well.
Similarly, most people thought a computer program would never be able to create a beautiful painting, or write poetry, or explain a joke, etc. Yet, these beliefs have also been falsified.
If you think we are farther away from having an AI that can competently control a robot body (data collection and GPU processing power and availability issues aside) than we were 10 years ago from having an AI that can successfully explain a joke with a non-negligible margin of success, then I don’t know what to tell you…
In fact, as we speak, there are already self-driving cars and other autonomous robots commercially deployed in the real world, which can be argued to be robotic bodies (although cars don’t have arms). And it’s true they have their limitations, but these limitations will only decrease over time while their autonomy will increase, if you believe we will keep making any progress at all, until the end of time.
Similarly, most people thought a computer program would never be able to create a beautiful painting, or write poetry, or explain a joke, etc. Yet, these beliefs have also been falsified.
No, they have not. Producing a statistical equivalent of a “beautiful painting” is not creation, it’s the synthesis of millions of inputs, painstakingly classified by humans, and fed into a model. The model is not creating anything, it is outputting results from a prompt.
In fact, as we speak, there are already self-driving cars and other autonomous robots commercially deployed in the real world
As far as I know, this is not true, but I am open to counterexamples.
these limitations will only decrease over time while their autonomy will increase
Computer games have not created images. Producing a statistical equivalent of an “image” is not creation, it’s the synthesis of millions of inputs (bits), painstakingly processed by CPUs, and fed into a computer program. The game is not creating anything, it is outputting results from its inputs.
Artists have not created beautiful paintings. Producing a statistical equivalent of a “beautiful painting” is not creation, it’s the synthesis of millions of inputs (e.g. what artists have seen and heard, i.e. their sensory inputs since they’ve been born), painstakingly classified by humans (i.e. word/label associations with sensory inputs), and fed into a model (i.e. their brain/nervous system). The model is not creating anything, it is outputting results from a prompt (e.g. the artist’s internal monologue, or their employer’s request).
As far as I know, this is not true, but I am open to counterexamples.
I had Waymo, autonomous delivery robots such as these, autonomous drones such as these and industrial robots in mind when I wrote that. Again, currently with limitations, including in their autonomy, but far from having reached the pinnacle of human technological progress for the rest of eternity.
This is, in my opinion, an unfounded assumption.
An assumption? Sure, although arguably a reasonable one. I don’t understand why you consider technological progress an unfounded assumption, not just in general, but also specifically in the field of AI which has been making so much progress so quickly lately. And with its almost daily breakthroughs, it arguably shows no sign of stopping anytime soon if ever…
I feel like this could’ve been much simpler if there were a way to swap a view without animation. Then you need two states: animation start angles and animation end angles. After the animation ends you can calculate new start and end angles modulo 360° and swap the view without animation.
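A minimal sketch of the modulo step in Python (the function name is made up; the UI-framework side of swapping the view is left out):

```python
def rebase_angles(start_deg: float, end_deg: float) -> tuple[float, float]:
    """Shift both angles by the same multiple of 360 so the start lands
    in [0, 360) while the sweep between start and end stays identical."""
    offset = (start_deg // 360) * 360
    return start_deg - offset, end_deg - offset

print(rebase_angles(720.0, 810.0))  # (0.0, 90.0): same picture, bounded state
```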
Technically true but they’re pretty clear that they’re only using it for repo hosting. They don’t seem to plan to use any of the GH features that might lock them in. They can switch hosting at any time. So not exactly beholden. At the same time they’re outsourcing the most tedious parts (infra, backup, etc.).
we will not be accepting Pull Requests at this time
(My emphasis.) Considering that the vast majority of potential contributors know Git better than Mercurial, and GitHub is by a long margin the leading VCS hosting platform, the pressure from contributors to accept PRs into GitHub is probably going to increase from now on. I wouldn’t be surprised if within the year there were thousands of upvotes for various issues which boil down to “Why not let contributors submit PRs?” and “It would be cool if we could use this GitHub feature.” I’d give it three years before PRs are accepted on GitHub, and then another two years before more than 90% of changes are submitted via GitHub PRs.
I personally would have liked the project to use a different git forge, but to be fair, many, many Mozilla projects are already on GitHub. Just see how many repos there already are. Mozilla started using GitHub many, many years ago.
Over 2000 repos under the mozilla organization - not even counting mozilla-services, mozilla-mobile, mozillasecurity.
But as someone working on the codebase, I can say switching from Mercurial (hg) to git is a very welcome change.
Mozilla was already a heavy user of GitHub for things that weren’t the main Firefox tree. When I worked there (2011-2015) all the repositories I dealt with were on GitHub (though of course, being Mozilla, the bugs/issues were all tracked in Bugzilla).
For any type of project that depends on community contribution, GitHub and its network effects make it not even a choice, really; projects that stick to ideologically-pure hosting options suffer for it because of the vastly smaller number of potential contributors who will seek out or jump through the hoops to interact with them there.
We will continue to use Bugzilla, moz-phab, Phabricator, and Lando
Although we’ll be hosting the repository on GitHub, our contribution workflow
will remain unchanged and we will not be accepting Pull Requests at this time
The changes will still land in Phabricator, not GitHub.
Phabricator is a code review platform, not a repository hosting platform. This looks like the same flow that LLVM had for a while:
GitHub is the canonical repository.
Things are reviewed on Phabricator before being merged into the repo.
If you’re only using it for repo hosting, there’s very little lock in with GitHub. It’s just a server that hosts the canonical mirror of your repo. You can even set up GitHub actions that keep another mirror up to date with every push, so if GitHub becomes unfortunate then you just update the URLs of remotes and keep working.
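Something like this untested sketch should do (the workflow name and the MIRROR_URL secret are invented; the clone URL is a placeholder):

```yaml
# .github/workflows/mirror.yml
name: mirror
on: push
jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      - run: |
          git clone --mirror https://github.com/example/repo.git repo.git
          cd repo.git
          git push --mirror "${{ secrets.MIRROR_URL }}"
```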
If you’re using GitHub issues, PRs, and so on, then you end up with GitHub metadata in your commit history and that makes moving elsewhere harder. If a commit says ‘Fixes: #1234’, you need to have access to the GitHub issues thing to find out what that actually means.
I think it is very well known in the phab user community. My previous company used it too and we were all aware. I would be surprised if they aren’t aware.
Y’all act like Mozilla didn’t have a conversation about this. I bet they got a few people on the team who understand the risks of the decision. This is, apparently, the choice they think is correct for the health of Firefox overall.
When an extremely well-known open source project decides to make a part of their process involve closed source infrastructure, that makes me doubt that the decision-makers truly understand the motivation of a lot of people who have been part of the history of the project.
Language has both significant whitespace and braces for code blocks.
Braces are also used for record literals.
The language has a single number type: DEC64, which appears to be an invention of the author. It doesn’t seem to be supported by any of the major CPU architectures: the author provides implementations of the add operation for x64, ARM64, and RISC-V64 (sic) that take 5 to 7 instructions and include branching.
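For reference, DEC64 packs a 56-bit signed coefficient and an 8-bit signed exponent into one 64-bit word, with the value being coefficient × 10^exponent. A rough Python model of that layout (a sketch of the encoding, not the reference implementation’s bit-twiddling):

```python
def dec64(coefficient: int, exponent: int) -> int:
    # 56-bit signed coefficient in the top bits, 8-bit signed exponent below
    assert -2**55 <= coefficient < 2**55 and -127 <= exponent <= 127
    return (coefficient << 8) | (exponent & 0xFF)

def value(d: int) -> float:
    coefficient = d >> 8                 # arithmetic shift keeps the sign
    exponent = d & 0xFF
    if exponent & 0x80:                  # sign-extend the exponent byte
        exponent -= 256
    return coefficient * 10.0 ** exponent

print(value(dec64(365, -2)))             # 3.65
```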
1 / 0 is null
- is only an infix operator. To get a negative number you use the neg function.
One advantage is that it can exactly represent integers up to 56 bits. The language provides a fit module for dealing with those; it mostly provides bit operations.
Language makes use of Unicode. That is, it doesn’t just allow Unicode but requires it for some parts.
Function is defined with ƒ (florin)
« and » (chevrons) are used for strings that do not have escape sequences in them (e.g. \n is literal two characters)
≠, ≤, ≥, ÷, and ≈ are all operators with no ASCII fallbacks.
At the same time the /\ and \/ operators are slashes and not any of the similar-looking Unicode characters.
Patterns are defined with ¶ (pilcrow), ¬ (not) is used in the patterns
℠ is used for system messages in actors
Functions will fail when given too many arguments but are fine when given too few. The missing ones are null.
def is used for constant definition. Constant in the JS sense: the name can not be assigned a new value but the value can be mutated.
These are scoped to the function and can not be used in an if or a loop.
There’s an invented concept of “stone”, which is what immutable values are called in the language. And I don’t see how it’s different from “immutable” other than being a little shorter.
Variables are defined with var but are likewise scoped to the function and can not be used in an if or a loop.
Assignment is done with the set operator: set x: 42.
It does all the usual stuff but can also push and pop an element from an array.
The language doesn’t have Regular Expressions but provides Patterns. They fill the same need but have a different syntax. The motivation is that PCRE syntax is cryptic, which is fair.
There are some built-in character classes but no general Unicode property lookup like in many modern regexp implementations.
Language has actors as one of the core concepts
Actors look like objects with an attached message queue. System (runtime?) handles message delivery and actors have a single entry point for all messages.
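The shape being described is roughly this (a generic Python sketch, not Misty syntax):

```python
import queue, threading

class Actor:
    """An object with an attached mailbox; one entry point for all messages."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)        # the 'system' delivering a message

    def _run(self):
        while (message := self.mailbox.get()) is not None:
            self.receive(message)        # the single entry point

    def receive(self, message):
        print("got", message)

a = Actor()
a.send("hello")
a.send(None)                             # sentinel: shut the actor down
a._thread.join()
```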
The Security section has some interesting-sounding propositions but it ends on a rather disappointing “The interfaces that are provided should practice Capability Discipline”. The section states that there are some boundaries in the runtime that prevent some sorts of access (like pointer arithmetic in e.g. C, or access by name like in JS) but doesn’t elaborate on how these barriers prevent access to things that a malicious actor has a reference to. A few general strategies are proposed to improve isolation and, as a consequence, security, but, as the name suggests, it mostly relies on Discipline.
So my impression… is that this is JS but worse in every way. I’m having a hard time taking this seriously.
If you hate JS’s events you’ll hate actors, too. Messages are more isolated and better for parallelism, but the queues are just as opaque. And on top of that you have to manage actors explicitly and deal with extra syntax for that.
The security proposition is weak sauce. At least the way it’s described.
The syntax is, frankly speaking, explicitly designed to step on the most toes possible. It’s impossible to type on virtually all keyboards. It goes out of its way to not use syntax familiar to almost everyone, whether they prefer the C lineage or the Algol/Simula/Pascal one (e.g. for assignment). And if that didn’t get you, it has significant whitespace for those who are used to block delimiters, and block delimiters to spite the Python people.
Everyone loved the fact that JS had only one numeric type and that it was a float. So the author brought that here too, but instead of a standard float supported by most hardware, he made it less efficient as well.
I suspect this is master-level trolling. I didn’t find any code implementing the language.
The other “interesting” bit is the inclusion of Design by Contract and pre- and post-conditions on functions. Which I would love to see more of, but the lack of any meat around what that looks like or how it works puts it in the same bucket as the rest of what you wrote up here.
Several times I’ve thought “NaN” is a strange thing - it’s introduced in IEEE 754 for obvious reasons, but when that standard is then transplanted into languages that can already represent “unknown/no-such-thing/failure”, isn’t it just another null type?
Like in SQL: the logic around most expressions that operate on NULL is similar to NaN, right? Ternary logic.
NaN is useful even in languages with not-a-thing types because it’s a specific set of domain errors. It can be useful to capture the difference between ‘a value was not provided’ and ‘an expression was provided but its result is not representable’ when handling errors. That said, languages with union types could often express NaNs better by making their floating point types explicitly a union of a valid number and one or more NaN types, and using normal pattern matching to extract NaNs.
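A small Python illustration of that distinction (the function is made up):

```python
import math

def reading(raw: str | None) -> float | None:
    if raw is None:
        return None     # 'a value was not provided'
    return float(raw)   # may come back as nan: provided but not representable

x = float("inf") - float("inf")   # a domain error yields NaN, not None
print(math.isnan(x), x is None)   # True False
```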
NaN in languages is a language problem, not an IEEE 754 problem. NaN is an exception-signalling mechanism. I think it’s nice we have it in hardware. But languages that provide an exception handling mechanism probably should not expose NaN.
For example, in Ruby 1/0 is an exception. And that’s the way it should be since exceptions are a first-class feature. That said, Float::INFINITY * 0 is NaN and that might’ve been handled better.
But C, for instance, has no exceptions, so exposing NaN is probably as good as one can do. It’s possible to build checks that return more common error codes, but NaN is about the same in terms of usability and is much faster, so why bother?
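Python draws the same line as Ruby here, for comparison:

```python
import math

try:
    1 / 0                       # division by zero raises, as in Ruby
except ZeroDivisionError as e:
    print("raised:", e)

x = math.inf * 0                # but IEEE 754 corners still leak out as NaN
print(x, math.isnan(x))         # nan True
```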
decimal registers and operations used to be common, from what i understand
(and they’re in knuth’s books)
i wouldn’t mind seeing them. there’d be a performance/hardware cost, but floating point numbers have killed people, so it’s probably worth it in some contexts
The IEEE floating point standard includes decimal floating point. IBM makes PowerPC chips with decimal floating point hardware. The demand for this probably comes from banks and other companies that are dealing with money.
10 is a pretty awkward number. The only reason to enshrine it in a data format is to make it (somewhat) easier to format and parse human-readable numeric strings. But that’s not the most common use of numbers in a computer program! Performance is a lot more important than formatting.
Yeah, there are some well-known pitfalls with conversion of IEEE floats to/from human readable strings. They’re avoidable, and storing or transmitting floats as strings should be avoided too. I’ve done a lot of it, and dealing with floats in JSON is so effing awkward compared to just memcpy’ing a float in a binary format.
C# has a decimal type, which is 128 bits, and has base 10 semantics
(it’s implemented in software, using the usual base 2 instructions, of course)
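Python’s stdlib decimal module is the same idea, for what it’s worth:

```python
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (exact base-10, done in software)
```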
and COBOL does base 10 math. at least part of the reason people still use it is because straightforward ports to java and such get the math wrong. not that java can’t do it, but it’s easier to make a mistake
i don’t think you want to use it everywhere. gamedev isn’t going to stop using 32 or even 16 bit floats any time soon, ML is going to keep trying to invent 4 bit floats, etc.
but they’re not the right tool for money. and there’s the famous patriot missile bug.
having a somewhat standard base 10 type would be nice, imo (in addition to floats)
I disagree on the 404 rule. You should absolutely use 404 both for missing resources and for URLs your app doesn’t recognise. Additional context can be provided in the response body, which is presumably a structured error per Rule 10. HTTP is the transport protocol; it has its own semantics, and implementations should not change them willy-nilly. The API semantics should be limited to the payload, that is, the response body and maybe non-standard headers. That’s what your client application should interpret. If you’re having a hard time separating the two, imagine using a completely different transport. Say, substitute HTTP with snail mail. How much of HTTP do you need to reinvent to make your API work?
I think this excerpt from Fielding’s original dissertation, where he described REST, is pretty great:
HTTP is not designed to be a transport protocol. It is a transfer protocol in which the messages reflect the semantics of the Web architecture by performing actions on resources through the transfer and manipulation of representations of those resources. It is possible to achieve a wide range of functionality using this very simple interface, but following the interface is required in order for HTTP semantics to remain visible to intermediaries.
Not only is diverging from the expected semantics of HTTP unusual, but it also presumes that all intermediaries are going to agree with your new twist. Another commenter very smartly noted that “status codes are for clients you don’t control”, which should include all the layers that may or may not be present on the public internet: proxies, caches, etc.
Additional context can be provided in the response body, which is presumably a structured error per Rule 10.
By that logic, why do we have different error codes at all? Just use one code from each class (2xx, 3xx, 4xx) and put custom error texts in the body.
No, I agree with the rule. Don’t use 404 for two different meanings. In particular, since this is not a rare edge case but a common problem in almost any API, having two codes for “request-path not understood” and “request-path understood, result is not there” is absolutely a good idea. You just have to find the best way to do that under the constraint of the HTTP codes that are available.
We have HTTP codes for HTTP clients. Specifically for HTTP clients that don’t speak your API like HTTP proxies. For them 404 is “I don’t have it” and they probably know how to interpret it. They don’t care whether the URL is handled or not by an app on the server. For what it’s worth it can be a bunch of files on a disk. And then it would definitely be the same. The endpoint not handled is identical to a missing file/directory. The missing resource is identical to a missing file/directory.
Now, your API client might want to distinguish between the two and your server might provide the information. Good, you should do that within your API. Which means in the response payload, not by inventing your own semantics for existing protocols.
For example, we all settled on 8-bit bytes but it’s just a convention. In the early days we had all sorts of byte lengths, and even now we occasionally build very specialised CPUs with bytes other than 8 bits long. This doesn’t mean you can decide to use, say, 7 bits everywhere. You can try, but you’re going to have a hard time.
Same goes for HTTP. You might be hunting for the edge cases of using 410 instead of 404 a bit longer than you would trying to go with 7-bit bytes, but this decision is definitely not consequence-free.
I’d say you do. HTTP Semantics (RFC 9110) specifies a registry for status codes and says a registration MUST contain specific information. This is in contrast with headers, which also have a registry, but their registration is not a MUST and is more informative in nature.
One of the things a registration specifies is how the response should be handled. For example, 404 is cacheable unless headers say otherwise; 400 is not. And I’m mentioning 400 because clients that don’t understand a status code should treat it as x00 from the same class. So 460 is going to look like 400 to compliant clients, and Bad Request is probably further from what you intended than 404.
Another thing 9110 says (15.5. Client Error 4xx):
Except when responding to a HEAD request, the server SHOULD send a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition.
So basically what I said previously: 404 Not Found + compatible payload explaining whether the API endpoint is not there or the resource is missing.
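A sketch of what that payload could look like, borrowing the RFC 7807 problem-details format (the URL and wording are invented):

```
HTTP/1.1 404 Not Found
Content-Type: application/problem+json

{
  "type": "https://api.example.com/errors/no-such-order",
  "title": "Order not found",
  "detail": "The /orders endpoint exists, but order 42 does not."
}
```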
Thanks, I guess you are right. It’s a bit frustrating. Recently I was looking for what code to use in case of an expired token. I ended up using 498 which is not registered, so I’m violating the spec. But what good is a spec that doesn’t even cover such basic cases… makes me frustrated.
I assume it’s some sort of access token (e.g. API access token). If so, it looks a lot like 401 Unauthorized.
The 401 (Unauthorized) status code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource.
9110 also very clearly states what should happen next:
The server generating a 401 response MUST send a WWW-Authenticate header field […] containing at least one challenge applicable to the target resource.
For example, OAuth defines its own auth-scheme for this header. It also defines a few parameters that allow exposing some information about the nature of the error so your client might not need to parse the payload.
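Concretely, RFC 6750 (bearer tokens) has the server answer an expired token roughly like this:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="example",
                  error="invalid_token",
                  error_description="The access token expired"
```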
As a once-in-a-while casual Python user, I find requirements.txt has one clear advantage: its usefulness is one command away, because pip is pre-installed with every Python. With Poetry I’d have to figure out how to get it installed before I can install the dependencies of whatever I actually want to run.
pipx is like pip, but it creates a virtual environment for the thing you’re installing, installs it there, and makes the binaries available somewhere sensible. It’s great for installing tools and applications and keeping them isolated from each other.
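For example (assuming pipx itself is already installed):

```console
$ pipx install poetry    # Poetry lands in its own venv
$ poetry --version       # but its binary is on PATH
```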
I wonder whether running an end-to-end test suite before forking would warm the memory enough to show similar results with traditional forking app servers like unicorn. Shopify probably has enough data to tailor a warming test suite to cover most if not all code paths and not take forever.
But this would be quite impractical, as you’d very significantly slow down rollout of a new version, and fast deploys are really key in incident management among other things.
Maintaining such a “warmup” suite would also be quite a pain.
This is also speculative, but a test suite has lots of potential to cover code paths infrequently seen in production, plus there’s the resource use of the test code itself.
When you take this through to its logical conclusion you arrive at PGO, because the best results likely come from measuring how often certain code paths are executed in production. We just need the developers of Ruby to come up with a way to actually feed that data to the JIT.
This looks interesting. Masked secrets and isolated environments look nice. Argument docs look nice, too, but I suspect they only work for those custom commands right now. Show me how well Workflow Discovery works with git and then we’ll see if it’s workable. Speaking of which, the whole demo doesn’t show any external commands being executed. Integrating this with existing software might pose some challenges, I imagine.
I’m most intrigued by how Undo will be implemented. I imagine it would require some sort of fs snapshotting to work with even a basic rm. And how would it work in a concurrent system (e.g. how do you undo cat test >> file after something else has written to that file later)? I’m also not sure how this can be implemented for some operations. Say, how would one undo a firewall configuration? Or operations that interact with other machines: unsend an email, undo a netcat scan, undo psql mydb < random.sql?
Considering that π generally is not used in any “correct” (la pi-does-not-exist) form in most calculations (not even at NASA), I do wonder how many curves would be needed to reach that “NASA approved circle”. 4 is already almost spot on; 8? 12?
If you use four Béziers, it’s off by 2.7e-4;
with eight Béziers, it’s off by 4.2e-6;
with 16, 6.6e-8;
with 32, 1.0e-9;
with 64, 1.6e-11;
with 128, 2.5e-13;
with 256, 4.4e-15.
So four isn’t enough to be pixel-perfect, but eight probably is pixel perfect up to a few thousand PPI.
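Those numbers are easy to sanity-check numerically; a quick sketch using the standard control-handle length for a circular arc:

```python
import numpy as np

def max_radial_error(n: int) -> float:
    """Max |radius - 1| when a unit circle is split into n cubic Béziers."""
    theta = 2 * np.pi / n
    k = 4 / 3 * np.tan(theta / 4)      # classic tangent-handle factor
    p0 = np.array([1.0, 0.0])
    p1 = np.array([1.0, k])
    p2 = np.array([np.cos(theta) + k * np.sin(theta),
                   np.sin(theta) - k * np.cos(theta)])
    p3 = np.array([np.cos(theta), np.sin(theta)])
    t = np.linspace(0.0, 1.0, 100_001)[:, None]
    b = ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
         + 3 * (1 - t) * t**2 * p2 + t**3 * p3)
    return float(np.abs(np.hypot(b[:, 0], b[:, 1]) - 1).max())

for n in (4, 8, 16):
    print(n, f"{max_radial_error(n):.1e}")  # ~2.7e-4, 4.2e-6, 6.6e-8
```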
I have a copy of the Postscript language reference manual from 1985, and it has an arcto operator for drawing proper circles. You can see it in the PLRM 3rd ed. on page 191.
It’s still an issue in PDF, which only supports straight line segments and cubic Bézier curves. PS is still around but it seems to be on the way out. For one, macOS removed the native feature to view PS files, and even before that PS was converted to PDF. This is an issue for PDF/E specifically, which was conceived as a format for the exchange of engineering data.
I no longer have my copy of the reference manual but I believe the goal with PDF was to construct a minimal subset of PS functionality, to make it simpler to implement. from that mindset it made sense to take out circles, since they were also taking out most of the flow control to avoid having to deal with DoS issues
the original goal, I mean. today of course it has JavaScript support, clickable URLs, network features for advertising, and all manner of other crap. they haven’t really revisited the actual drawing operators in a long time though
That doesn’t mean it’s a perfect circle. I would strongly suspect it’s made of Bézier curves, just because those are the primitive shape PS uses. An arc isn’t going to be rendered as-is — it gets combined with any other path segments, then possibly stroked (which turns it into another shape, an outline of the stroked area) or filled.
Ah, abstractions. But what does the right example abstract? According to the OP: “low-level implementation details (e.g., how to heat the oven), intermediate-level functions (e.g., how to bake pizza), and high-level abstractions (e.g., preparing, baking, and boxing the pizza).” Domain-specific concepts all over the place, so let’s go with that. Do we care about any of those concepts enough to abstract them away? That is, can we change/swap/remove/add any of the parts, or is the whole thing “atomic” from the domain’s perspective?
This is very much a matter of perspective. I personally start with the linear (left) type of code and abstract things away when there’s a need. I very much dislike any premature breaking of code into pieces. I’m of the opinion that rules like “methods no longer than 5 lines” are thoroughly misguided and more often than not produce unreadable and extremely inefficient code.
The SI prefix kilo- denotes 1,000. Overloading it to mean 1,024 in some contexts is the ambiguity. Hence kibi-, mebi- (I’ve only seen these in geekily-themed software applications, so forgive me if I got the details wrong), which are unambiguous. You can even use them outside the storage domain, if that’s your jam. Welcome to the 2 kibimeter race, 2048 meters!
Kilobyte was very unambiguously 1024 bytes some 15 years ago. It became ambiguous when ISO 80000 was published and hard drive marketing teams eagerly jumped on kilo’s “metricness”.
Sadly, that isn’t true. Storage typically used the power-of-two version because it was important for addressing: a 16- or 32-bit address can address an exact power-of-two number of 2^10 units, but not of 10^3 units. For networking this was not the case, and so data transfer speeds over serial, parallel, and Ethernet links were always expressed in 10^3 units, because they needed to be integer multiples of the clock speed of the bus, which was expressed in decimal.
This led to a lot of confusion. If you have data coming in at 10 Mb/s, and you have 10 MB of storage, how long does it take to fill up? The answer ought to be 8 seconds with consistent units, but the Mb/s probably meant 10^6 bits, whereas the MB probably meant 2^20 bytes. But maybe they both used binary units.
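The two readings really do diverge; in Python:

```python
rate_bits = 10 * 10**6           # 10 Mb/s: link speeds are decimal
size_decimal = 10 * 10**6 * 8    # 10 MB as 10^6 bytes, in bits
size_binary = 10 * 2**20 * 8     # 10 MB as 2^20 bytes, in bits

print(size_decimal / rate_bits)  # 8.0 seconds
print(size_binary / rate_bits)   # 8.388608 seconds
```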
Using kibibyte and friends makes it unambiguous: it’s definitely base-2. Unfortunately, enough people still use SI prefixes for binary things that it remains ambiguous.
It’s worth noting that the article is misleading in one respect. The Darwin utilities are now consistent in providing flags that let you explicitly select either base-2 or base-10 prefixes.
Looks like the beginning of the end for the unnecessary e-waste caused by companies forcing obsolescence and the anti-consumer patterns made possible by the lack of regulation.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m an EU citizen, and I see this argument made every single time the EU passes new legislation affecting tech. So far, those worries have never materialized.
I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell its products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as making the stem detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken, it is about a service-time-replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves out a special exception for batteries that’s even more manufacturer-friendly than the other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and have a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirement set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic), so in practice that just means that if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries, that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Do you think Apple engineers are not capable of designing AirPods that have a removable battery?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to withdraw your consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways and in hiring people to ensure compliance, and it falls especially hard on new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party.
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not in either Europe or Australasia, and it’s cultural as much as it’s economic.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons, but it’s clearly not sufficient. Other countries have spent big from the taxpayer’s purse and not spawned a Silicon Valley of their own.
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
Silicon Valley was born through the intersection of several contributing factors, including a skilled science research base housed in area universities, plentiful venture capital, permissive government regulation, and steady U.S. Department of Defense spending.
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much of a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third-party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make slippery-slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. Look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore (which is controlled by a third-party committee/community, and which many of their chats were designed after), they want to create a new standard together, & will likely find a way to hit the minimum legal requirements while still keeping the majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol via a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reasons the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining that your mobile phone electrocutes you or gives you RF burns or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious, however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but you also express concern that regulation will quickly slide down a slippery slope. So I am curious whether you think the current system, where there is more and more lock-in both on the web and in devices, can be pried back from those parties?
The alternative is not regulating, and it’s delivered absolutely stunning results so far.
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again, and the associated anti-consumer decisions made to the benefit of the bigger players. This situation is well known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations won’t attempt to maximize their profits when they have an opportunity.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
As a customer, I react to this by never voluntarily buying Apple products.
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product.
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap components (pairing) that are supported for only 1 year, what are people to do? If people have no idea how secure someone’s authentication on GitHub is, how can they make an informed decision about security?
But by the same token, if people don’t care to research the repair costs of their devices before buying them
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways and in hiring people to ensure compliance, and it falls especially hard on new entrants to the field.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Ensuring “additional” compliance is often a one-time cost. As an EE, you’re supposed to know these things and keep up with them, you don’t come up with a schematic like they taught you in school twenty years ago and hand it over to a compliance consultant to make it deployable today. If there’s a major regulatory change you maybe have to hire a consultant once. More often than not you already have one or more compliance consultants on your payroll, who know their way around these regulations long before they’re ratified (there’s a long adoption process), so it doesn’t really involve huge costs. The additional compliance testing required in this bill is pretty slim and much of it is on the mechanical side. That is definitely not one-time but trivially self-certifiable, and much of the testing time will likely be cut by having some of it done on the supplier end (for displays, case materials etc.) – where this kind of testing is already done, on a much wider scale and with a lot more parameters, so most partners will likely cover it cost-free 12 months from now (and in the next couple of weeks if you hurry), and in the meantime, they’ll do it for a nominal “not in the statement of work” fee that, unless you’re just rebranding OEM products, is already present on a dozen other requirements, too.
An embarrassing proportion of my job consisted not of finding creative ways to fit a removable battery, but in finding creative ways to keep a fixed battery in place while still ensuring adequate cooling and the like, and then in finding even more creative ways to design (and figure out the technological flow, help write the servicing manual, and help estimate logistics for) a device that had to be both testable and impossible to take apart. Designing and manufacturing unrepairable, logistically-restricted devices is very expensive, too, it’s just easier for companies to hide its costs because the general public doesn’t really understand how electronics are manufactured and what you have to do to get them to a shop near them.
The intrinsic difficulty of coming up with a good design isn’t a major barrier of entry for new players any more than it is for anyone. Rather, most of them can’t materialise radically better designs because they don’t have access to good suppliers and good manufacturing facilities – they lack contacts, and established suppliers and manufacturers are squirrely about working with them because they aren’t going to waste time on companies that are here today and they’re gone tomorrow. When I worked on regulated designs (e.g. medical) that had long-term support demands, that actually oiled some squeaky doors on the supply side, as third-party suppliers are equally happy selling parts to manufacturers or authorised servicing partners.
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong, but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-run marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen that has the same priorities I do not be able to buy the device they want?
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8-bit micros with 64 KiB of RAM to pervasive Internet and pocket supercomputers in one generation.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 was supported for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making their products both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, deregulation is allowing businesses to compensate for their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period
Only for a small minority of popular, successful products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve
Deregulation is the “ground state”.
It’s not supposed to achieve anything in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Only for a small minority of popular, successful products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ’90s; schools were only gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers, but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
It’s not supposed to achieve anything in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside; they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade, and no one else can because the vendor has locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice but to upgrade.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1], so once a handset has shipped, the vendor has made as much money as it possibly can. If a vendor makes a phone that gets updates for longer, then it will cost more. Customers don’t see that benefit at point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime, which research has shown has a surprisingly large impact on purchasing decisions. By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag, for instance, is the existence of such long-lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of markets is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that it’s a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why economics is so intrinsically linked to politics and philosophy: philosophy and politics define policy goals; economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way: mechanics tells you whether a set of materials arranged in a particular way will be stable; engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be, and in both cases these lead to refinements of the models.
One big red flag, for instance, is the existence of such long-lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now?
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. Eventually the regulations banned goods below a certain efficiency rating, but that was largely unnecessary because the market had already adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV
I can see how such usurpation could distort my view.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist.
Well… yeah.
Precisely what [markets] will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here.
I love this example. It plainly shows that people often fail to choose by such and such criterion not because they don’t care about it, but because they just can’t measure it even if they do care. Even a Libertarian should admit that making good purchase decisions requires being well informed.
You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do).
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public entities (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
My favourite example is the Internet: the fibre should be installed by public entities (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please and compete among each other.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable? If it’s the latter, you’re going to want to amortise the cost across a decade, and pricing it so that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less private enterprises will want to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
The problem with public ownership is that it’s hard to incentivise efficiency improvements.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject; I’m not informed enough to have an opinion on where the public/private frontier is best placed.
The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less private enterprises will want to invest in it, and if they do, the more they will want to extract rent from their investment
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so, because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away taxpayers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work; you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off, but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years’ time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win); the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective, and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
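As a toy sketch of the two variants (this reflects no particular procurement rule; it’s just to make the mechanics concrete):

// bids: e.g. [{ who: 'A', price: 90 }, { who: 'B', price: 120 }, { who: 'C', price: 150 }]
// Assumes at least two bids; ties and other real-world rules are ignored.
function awardContract(bids, variant) {
  const byPrice = [...bids].sort((a, b) => a.price - b.price);
  const [lowest, nextLowest] = byPrice;
  if (variant === 'second-lowest-wins') {
    // Variant 1: a stupidly low bid can't win; the contract goes to the
    // second-lowest bidder, who is paid their own asking price.
    return { winner: nextLowest.who, paid: nextLowest.price };
  }
  // Variant 2: the lowest bidder wins but is paid what the next-lowest
  // bidder asked for, so the buyer pays a price someone else also thought viable.
  return { winner: lowest.who, paid: nextLowest.price };
}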
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them)
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, and the only way to really find out would be to set up a Great Leap Forward in the 21st century.
Were they actually successful, or did they only decrease operating energy use?
I may be misunderstanding your question, but energy ratings aren’t based on energy consumption across the device’s entire lifetime; they’re based on energy consumption over a cycle of operation of limited duration, or a set of such cycles (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances has been decreasing over the last few decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
You can’t get a better rating by making a device that lasts half as long.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
Only for a small minority of popular, successful products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets; I’m talking things like devices sold only on the Japanese market for a few years, or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore, so you can only get them from existing stocks or use clone parts). It’s pretty ridiculous that I can repair thirty-year-old hardware just fine, but if my MacBook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts; and not because they’re not manufactured anymore, but because no one will sell them to me.
It’s not supposed to achieve anything in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved those goals in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices: that is really not true. Companies absolutely can and do force customers not to upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been much progress in smartphone hardware over the last 10 years.
Since 2015 every smartphone has been the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation: phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game. Just as PC hardware specs were propelled by gaming, so is the mobile market driven by games, and mobile is now, I believe, the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
I don’t think the games are really that CPU / GPU intensive
But a lot of them are intensive & enthusiasts often prefer that. Still, those time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I was recently reunited with my OnePlus 1 (2014) running LineageOS, & it was choppy at just about everything, especially loading map tiles on OSM (this was using the apps from when I last used it (2017), in airplane mode, so not just contemporary bloat). I tried Ubuntu Touch on it this year (2023) (listed as great support) & it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even if not performance-bottlenecked, efficiency is certainly better (I highly doubt it’d save more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years & more than competitive once you factor in cost. I doubt anyone is buying into either platform purely based on performance numbers anyhow, versus ecosystem and/or wanting hardware options not offered by one or the other.
Those are some cherry-picked comparisons. Apple releases on a different cadence. Check right now, & the S23 beats up on it, as do most flagships now. If you blur the timing, it’s all about the same.
With phones of the same tier released before & after, you can see benchmarks are all close, as is battery life. Features are wildly different tho, since Android can offer a range of different hardware.
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, and 12 have distinct visual styles, and between vendors this distinction can widen further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, they used to) - but consumers notice, even if they can’t clearly relate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful, but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means that they were given a Hobson’s choice.
but manufacturers don’t provide it because it goes against their interest.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace, and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost through shorter lifetimes. This 30min video by Technology Connections covers the point really well.
This cynical view is unwarranted in the case of the EU, which so far is doing pretty well at avoiding regulatory capture.
The EU has a history of actually forcing companies to innovate in important areas where they themselves wouldn’t want to, like energy efficiency and ecological impact. And its regulations are generally set to start with realistic requirements, and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:
The number of years that the device would get security updates.
The maximum time between a vulnerability being disclosed and the device getting the update.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels, but companies were free to promise as much or as little as they wanted. Her research looked at this across a few consumer-goods classes and used the standard methodology where users are shown a small number of devices with different specs and different things on these labels and then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers, but she found that users consistently were willing to select higher-priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proud when they come up with a new design. Remember the 15-min mini-doc on their processes when they introduced unibody MacBooks? Or the 10-min video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery and is also waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.
Depending on how you define “progress”, there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that will ever come to the mass-produced iPhone, but maybe a useful feature for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance for it to ever come to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
This overall shift will favor long-term R&D investments of the kind placed before our last two decades of boom. It will improve innovation the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
Let’s take at face value the claim that numbered lists are needed to facilitate references to specific items in the list. That case is covered very well by current standards; one just has to use them properly. And it goes something like this:
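Roughly: give each item a stable id and let CSS counters generate the numbering. (This is my reconstruction; the class and id names are arbitrary.)

<ol class="reference-list">
  <li id="item1">First item</li>
  <li id="item2">Second item</li>
  <li id="item3">Third item</li>
</ol>

.reference-list {
  list-style: none;    /* suppress the native markers… */
  counter-reset: item;
}
.reference-list > li {
  counter-increment: item;
}
.reference-list > li::before {
  content: counters(item, ".") ". ";  /* …and regenerate them from a counter we control */
}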
Now, elsewhere, where you want to reference the items you need to properly link to them: <a class="list-reference" href="#item1">...</a>. And properly style the link:
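Presumably with something like target-counter() from the CSS Generated Content spec, which makes a link display the counter of the element it points to. (Fair warning: browser support for target-counter() is patchy; it’s mostly implemented by print formatters like Prince, so treat this as the spec-blessed shape of the idea rather than something every browser does today.)

a.list-reference::after {
  content: " (item " target-counter(attr(href url), item) ")";
}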
This way you get generated item numbering consistent throughout the document. Whatever you do to the list (add items, remove them, reorder them), the document still remains consistent and you don’t have to edit every single reference whenever the list changes.
Now, the issue of copying the generated counters is still present. It can be rectified with a little bit of JS to inline the generated item numbering. It would use the very same API that you’ve probably seen used in the most annoying cases, where a website adds some sort of attribution to the copied text.
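A sketch of that inlining (the copy event and clipboardData are the standard clipboard API; the renumbering below is a naive approximation, and a more faithful version would read the CSS-generated counter text from getComputedStyle(li, '::before')):

document.addEventListener('copy', (event) => {
  const selection = document.getSelection();
  if (!selection || selection.isCollapsed) return;
  // Work on a clone of the selected fragment, so the page itself is untouched.
  const fragment = selection.getRangeAt(0).cloneContents();
  const items = fragment.querySelectorAll('li');
  if (items.length === 0) return;  // no list items selected: keep default behaviour
  let text = '';
  items.forEach((li, i) => {
    text += (i + 1) + '. ' + li.textContent.trim() + '\n';
  });
  event.clipboardData.setData('text/plain', text);
  event.preventDefault();  // replace the default clipboard payload with ours
});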
I believe this is why law was brought up at all: it uses these references a lot.
No list is above the law.
I concede though that law probably doesn’t fit into this solution neatly. The main problem with law is that the numbering has to stay consistent not only across the document but also across editions, so that all other documents, at all times, can reference the same item in the law regardless of its edition.
I’m not sure what the practice is in the USA, but in my country the common practice is that the latest version of a particular law contains the latest edition of each changed item. Deleted items are replaced with a placeholder (e.g. “6.4.a has been removed by such and such other document”), and new items are added at the end of the corresponding level of nesting.
In this configuration it’s even possible to use the method proposed above.
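In markup, that convention could look something like this (the ids and the placeholder class are invented for illustration). Because the removed item keeps its li, every item after it keeps its generated number, so old references stay valid:

<ol class="reference-list">
  <li id="item1">…current text of item 1…</li>
  <li id="item2" class="removed">Removed by such and such other document.</li>
  <li id="item3">…item added later, at the end of this nesting level…</li>
</ol>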
The whole argument of semantics is moot.
If one doesn’t like HTML ol, they can invent their own custom element and style it however they want. Replicating the default list style would take maybe 20 lines of CSS, and there’s not that much interactivity in it to worry much about the accessibility issues that come with, say, a custom select implementation. If you want to get really fancy, you can invent your own XML schema and style it with XSLT/CSS to your heart’s content. These are all ancient (in internet years) technologies that are well supported by all modern browsers, including mobile. Shortcomings of ordered lists in HTML are very likely pretty low on the (ordered) list of reasons for the downfall of civilization. And I suspect that this might be an argument made not quite in good faith.
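For instance, a sketch of those 20-ish lines (the tag names are invented; a real version should also expose list semantics to assistive tech, e.g. via role="list" / role="listitem"):

my-list {
  display: block;
  padding-left: 2em;
  counter-reset: li;
}
my-list > my-item {
  display: block;
  counter-increment: li;
}
my-list > my-item::before {
  content: counter(li) ". ";  /* regenerate the ol-style numbering */
}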
The OP doesn’t directly mention it, but they gesture towards the idea by insisting that the main function of numbering in lists in law is actually identification.
I agree, and I believe it’s empirically true. Many laws came into effect many decades ago. Since then many amendments have been made to them, but they are still standing in their current edition. At the same time, particular sections/paragraphs have been referenced in external documents, be it court proceedings/rulings, other laws, or a massive amount of writing about laws ranging from books to blog posts. We don’t want to invalidate older references with every amendment we make to the law. For one, that would make it very hard to make sense of old court rulings, as one would have to find the specific edition of the law that was in effect at the time. Stable identifiers for pieces of law seem like a good idea for the pre-computer age (which, in the realm of law, is right now).
This is exactly why it’s a bad candidate for AI service. You need an OCR and a simple script to transform the table. You almost certainly want to avoid LLMs when you have a simple deterministic solution at hand.
it’s a PDF with structure and text, doesn’t even need OCR. From the fairly standard tools, pdftohtml sadly doesn’t manage to preserve the table structure, but pdftotext does a reasonably good job.
Even more reasons to not involve LLMs.
The author is assuming here that HomeAssistant is detecting the presence of other things running, but that’s one thing that containers prevent, unless you’ve explicitly punched holes between them. It sounds like an obnoxious notification, but also that the author doesn’t really understand why it’s happening.
Was it recent or long enough ago? What was the actual problem? Nextcloud is being used as evidence of…. Something here. But what? And what’s wrong with putting a “plain old PHP application” in a container? They don’t mandate you use a container; you have the choice.
I like keeping PHP stuff isolated from my OS, and being able to upgrade apps and PHP versions for the apps independently. On my personal VPS roadmap is to move a media wiki into a container, precisely so I can decouple OS upgrades from both PHP and mediawiki (it’s currently installed from the Debian package)
OP installed HomeAssistant as a regular piece of software outside of docker and was surprised that it doesn’t like sharing the machine. It seem to point that HA is either very greedy or demands a container as it’s primarily deployment methods. And I agree with OP either is kinda unconventional installation strategy.
HA really, really wants to be installed on “Home Assistant OS”. Preferably on either a Pi or one of their supported appliances:
https://www.home-assistant.io/installation/
Other installation methods are meant for “experts”. I spent some time looking at it and decided it was too much trouble for me. I don’t really understand why they want that, either. If I wanted to understand, it looks like the right way to go about it would be to stand it up on their distribution and examine it very carefully. The reasoning was not clearly documented last time I looked.
I suspect that HA is really fragile, and makes many assumptions about its environment that causes it to fall over when even the tiniest thing is wrong. I suspect this because that’s been my experience even with HAOS.
Home Assistant is actually very robust, I ran it out of a “pip install home-assistant” venv for a few years and it was very healthy, before I moved it out to an appliance so the wall switches would still work whenever the server needed rebooting. Each time I upgraded the main package, it would go through and update any other dependencies needed for its integrations, with the occasional bump in Python version requiring a quick venv rebuild (all the config and data is separate).
Home Assistant wants to be on its own HassOS because of its user-friendly container image updates and its add-on ecosystem, which enables companion services like the MQTT broker or Z-Wave and Zigbee adapters.
Home Assistant works very poorly in general in my experience, even when you give it exclusive control over the whole machine.
I wrote this in 2010 and still believe it:
My experience using LLMs so far is that they are slightly positive on productivity, but only slightly, because they also tend to send you down a lot of blind alleys. I’m sure it would be better for me if I paid for ChatGPT 4, or if I had a different personality type that didn’t mind not understanding things from first principles as much. It’s neat that all this LLM stuff is happening, and maybe at some point they will figure out how to glue it together with an expert system to make something really intelligent, but even that won’t solve the a priori vs. a posteriori problem mentioned in the talk.
I don’t find your argument too convincing.
Let’s assume we somehow come up with “an AI with high IQ” agent. Why do you think it would need to experiment? We do a lot of science in simulations already; why wouldn’t an AI be able to run simulations? We have bazillibytes of real experimental data for all branches of science; don’t you think an AI would be able to incorporate it? We do it all the time – we find all sorts of new applications for data that was collected for completely unrelated studies. You also point out that AI wouldn’t be able to improve chips much because we already do all our chip design in CAD. But we still use modular designs that are not as optimised as integrated ones. We also use discrete schematics even though we’ve been in the realm of quantum effects for quite a while now. A couple of years ago a team used some ML to optimise circuits and it came up with something that shouldn’t have worked as a discrete schematic but IIRC exploited some induction effects and worked OK. There’s plenty of space for novel approaches that can yield better performance. An AI could also come up with a different solution for scaling its capabilities: what if, instead of manufacturing new chips, it went for a distributed model and hacked all the shitty IoT devices out there?
Is there any reason to think that future AI implementations won’t be able to make experiments?
Or what’s more, is there any doubt that as soon as we have intelligent AI that is also capable of competently controlling a robot body, that it won’t immediately be put into thousands of them?
Indeed, the difference between ChatGPT 3.5 and 4 is quite significant. I find 3.5 almost useless compared to 4.
There is a substantial amount of doubt that this will ever happen.
To put things into perspective: at one point most people thought that a computer program would never be able to win a game of chess against the best human player. Later, when this belief was falsified, it shifted to the game of Go, where it has now been falsified as well.
Similarly, most people thought a computer program would never be able to create a beautiful painting, or write poetry, or explain a joke, etc. Yet, these beliefs have also been falsified.
If you think we are further away from having an AI that can competently control a robot body (data collection and GPU processing power and availability issues aside) than we were 10 years ago from having an AI that can successfully explain a joke with a non-negligible success rate, then I don’t know what to tell you…
In fact, as we speak, there are already self-driving cars and other autonomous robots commercially deployed in the real world, which can be argued to be robotic bodies (although cars don’t have arms). True, they have their limitations, but those limitations will only decrease over time while their autonomy increases, assuming you believe we will keep making any progress at all.
No, they have not. Producing a statistical equivalent of a “beautiful painting” is not creation, it’s the synthesis of millions of inputs, painstakingly classified by humans, and fed into a model. The model is not creating anything, it is outputting results from a prompt.
As far as I know, this is not true, but I am open to counterexamples.
This is in my opinion, an unfounded assumption.
Computer games have not created images. Producing a statistical equivalent of an “image” is not creation, it’s the synthesis of millions of inputs (bits), painstakingly processed by CPUs, and fed into a computer program. The game is not creating anything, it is outputting results from its inputs.
Artists have not created beautiful paintings. Producing a statistical equivalent of a “beautiful painting” is not creation, it’s the synthesis of millions of inputs (e.g. what artists have seen and heard, i.e. their sensory inputs since they’ve been born), painstakingly classified by humans (i.e. word/label associations with sensory inputs), and fed into a model (i.e. their brain/nervous system). The model is not creating anything, it is outputting results from a prompt (e.g. the artist’s internal monologue, or their employer’s request).
I had Waymo, autonomous delivery robots such as these, autonomous drones such as these and industrial robots in mind when I wrote that. Again, currently with limitations, including in their autonomy, but far from having reached the pinnacle of human technological progress for the rest of eternity.
An assumption? Sure, although arguably a reasonable one. I don’t understand why you consider technological progress an unfounded assumption, not just in general, but also specifically in the field of AI which has been making so much progress so quickly lately. And with its almost daily breakthroughs, it arguably shows no sign of stopping anytime soon if ever…
I feel like this could’ve been much simpler if there were a way to swap a view without animation. Then you need two states: animation start angles and animation end angles. After the animation ends, you can reduce the new start and end angles modulo 360° and swap the view without animation.
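A minimal sketch of that idea; the view object and its `apply_rotation(angle, animate=...)` method are hypothetical, just to show the flow:

```python
def on_animation_end(view):
    # Angles accumulate across spins; reduce them so they never grow unbounded.
    view.start_angle %= 360
    view.end_angle %= 360
    # Re-apply the visually identical normalized angle with animation off,
    # so the user sees no jump and the next spin starts from small values.
    view.apply_rotation(view.end_angle, animate=False)  # hypothetical API
```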
Hosting on GitHub, so now they will be beholden to another tech giant, and using a closed source platform. 😢
Technically true but they’re pretty clear that they’re only using it for repo hosting. They don’t seem to plan to use any of the GH features that might lock them in. They can switch hosting at any time. So not exactly beholden. At the same time they’re outsourcing the most tedious parts (infra, backup, etc.).
That’s also how Python started using Github, before migrating PRs, then issues.
(My emphasis.) Considering that the vast majority of potential contributors know Git better than Mercurial, and GitHub is by a long margin the leading VCS hosting platform, the pressure from contributors to accept PRs into GitHub is probably going to increase from now on. I wouldn’t be surprised if within the year there were thousands of upvotes for various issues which boil down to “Why not let contributors submit PRs?” and “It would be cool if we could use this GitHub feature.” I’d give it three years before PRs are accepted on GitHub, and then another two years before more than 90% of changes are submitted via GitHub PRs.
I personally would have liked the project to use a different git forge, but to be fair, many, many Mozilla projects are already on GitHub. Just see how many repos there already are. Mozilla started using GitHub many years ago: over 2,000 repos under the mozilla organization, not even counting mozilla-services, mozilla-mobile, and MozillaSecurity. But as someone working on the codebase, switching from Mercurial (hg) to git is a very welcome change.
Mozilla was already a heavy user of GitHub for things that weren’t the main Firefox tree. When I worked there (2011-2015) all the repositories I dealt with were on GitHub (though of course, being Mozilla, the bugs/issues were all tracked in Bugzilla).
For any type of project that depends on community contribution, GitHub and its network effects make it not even a choice, really; projects that stick to ideologically-pure hosting options suffer for it because of the vastly smaller number of potential contributors who will seek out or jump through the hoops to interact with them there.
They’re pretty clear about using GH just as a mirror, not for development.
No, they’re pretty clear about “hosting the repository on GitHub”, with all changes landing there first.
How do you get that from:
The changes will still land in Phabricator, not GitHub.
Phabricator is a code review platform, not a repository hosting platform. This looks like the same flow that LLVM had for a while:
If you’re only using it for repo hosting, there’s very little lock in with GitHub. It’s just a server that hosts the canonical mirror of your repo. You can even set up GitHub actions that keep another mirror up to date with every push, so if GitHub becomes unfortunate then you just update the URLs of remotes and keep working.
If you’re using GitHub issues, PRs, and so on, then you end up with GitHub metadata in your commit history and that makes moving elsewhere harder. If a commit says ‘Fixes: #1234’, you need to have access to the GitHub issues thing to find out what that actually means.
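For the mirror-on-every-push idea from the previous paragraph, a sketch in Python around plain git commands (the `repo.git` layout and the `fallback` remote name are assumptions):

```python
import subprocess

def sync_mirror(repo_dir: str = "repo.git") -> None:
    """Push everything from the canonical repo to a fallback mirror.
    Assumes repo_dir is a bare --mirror clone with a remote named 'fallback'."""
    def git(*args: str) -> None:
        subprocess.run(["git", "-C", repo_dir, *args], check=True)
    git("fetch", "origin", "--prune")    # pick up the latest pushed state
    git("push", "fallback", "--mirror")  # refs, tags, notes: everything

sync_mirror()
```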
And a group accustomed to Phabricator is not going to willingly switch to GitHub’s review tools, which are vastly inferior.
Isn’t that exactly the choice the LLVM project made?
As I recall a lot of people were unwilling, and understandably so.
The migration to GitHub pull requests for LLVM is a total disaster. I have a summary at https://maskray.me/blog/2023-09-09-reflections-on-llvm-switch-to-github-pull-requests , though I try to use a less aggressive tone…
https://phacility.com/phabricator/
phorge is a maintained fork https://we.phorge.it/
Cool, but why then did they say Phabricator? Did they not migrate yet? Are they aware of the fork, or that the original program is unmaintained?
Presumably for the same reason we say “Twitter” - everyone knows what the old thing is, and the name doesn’t really matter.
I think it is very well known in the phab user community. My previous company used it too and we were all aware. I would be surprised if they aren’t aware
We upgraded the VyOS tracker to Phorge long ago but references to “Phabricator” are still everywhere. Not going away any time soon. :)
Y’all act like Mozilla didn’t have a conversation about this. I bet they got a few people on the team who understand the risks of the decision. This is, apparently, the choice they think is correct for the health of Firefox overall.
When an extremely well-known open source project decides to make a part of their process involve closed source infrastructure, that makes me doubt that the decision-makers truly understand the motivation of a lot of people who have been part of the history of the project.
At least this one allows comments.
Not in JSON, though.
From a cursory look through the doc:
- `1 / 0` is `null`.
- `-` is only an infix operator. To get a negative number you use the `neg` function.
- There’s a `fit` module for dealing with that. It mostly provides bit operations.
- The `/\` and `\/` operators are slashes, not any of the similar-looking Unicode characters.
- `null`.
- `def` is used for constant definition. Constant in the JS sense: the name can not be assigned a new value, but the value can be mutated. Constants are scoped to the function and can not be used in `if` or `loop`.
- Variables are declared with `var`, but are likewise scoped to the function and can not be used in `if` or `loop`.
- Assignment is done with the `set` operator: `set x: 42`.

So my impression is… this is JS but worse in every way. I’m having a hard time taking it seriously.
If you hate JS’s events you’ll hate actors, too. Messages are more isolated and better for parallelism, but the queues are just as opaque. And on top of that, you have to manage actors explicitly and deal with extra syntax for that.
The security proposition is weak sauce. At least the way it’s described.
The syntax is, frankly speaking, explicitly designed to step on the most toes possible. It’s impossible to type on virtually all keyboards. It goes out of its way to avoid syntax familiar to almost everyone, whether they prefer the C lineage or the Algol/Simula/Pascal one (e.g. for assignment). And if that didn’t do it for you, it has significant whitespace for those used to block delimiters, and block delimiters to spite the Python people.
Everyone loved the fact that JS had only one numeric type and that it was a float. So the author brought that here too, but instead of the standard float supported by most hardware, made it less efficient as well.
I suspect this is a master-level trolling. I didn’t find any code implementing the language.
The other “interesting” bit is the inclusion of Design by Contract and pre- and post-conditions on functions. Which I would love to see more of, but the lack of any meat around what that looks like or how it works puts it in the same bucket as the rest of what you wrote up here.
It seems Crockford’s intent with DEC64 is that it eventually be implemented in hardware? https://www.crockford.com/dec64.html
Several times I’ve thought “NaN” is a strange thing - it’s introduced in IEEE 754 for obvious reasons, but when that standard is then transplanted into languages that can already represent “unknown/no-such-thing/failure”, it’s just another null type?
Like in SQL: the logic around most expressions that operate on NULL is similar to NaN, right? Ternary logic.
NaN is useful even in languages with not-a-thing types because it’s a specific set of domain errors. It can be useful to capture the difference between ‘a value was not provided’ and ‘an expression was provided but its result is not representable’ when handling errors. That said, languages with union types could often express NaNs better by making their floating point types explicitly a union of a valid number and one or more NaN types and use normal pattern matching to extract NaNs.
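A sketch of that union-of-NaNs idea in Python 3.10+ (the `NaN` wrapper type here is made up for illustration, not a standard library feature):

```python
from dataclasses import dataclass

@dataclass
class NaN:
    reason: str  # which domain error produced this non-number

Number = float | NaN  # a value is either a valid number or a tagged NaN

def div(a: float, b: float) -> Number:
    if b == 0.0:
        return NaN("division by zero")
    return a / b

match div(1.0, 0.0):
    case NaN(reason):
        print("not a number:", reason)
    case x:
        print("value:", x)
```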
NaN in languages is a language problem, not an IEEE 754 problem. NaN is an exception-signalling mechanism. I think it’s nice we have it in hardware. But languages that provide an exception handling mechanism probably should not expose NaN.

For example, in Ruby `1/0` is an exception. And that’s the way it should be, since exceptions are a first-class feature. That said, `Float::INFINITY * 0` is `NaN`, and that might’ve been handled better. But C, for instance, has no exceptions, so exposing NaN is probably as well as one can do. It’s possible to build checks that return more common error codes, but NaN is about the same in terms of usability and is much faster, so why bother?
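Python, for comparison, draws the same line in roughly the same place:

```python
import math

try:
    1 / 0                 # exceptions are first-class, so division raises
except ZeroDivisionError:
    print("raised")

x = math.inf * 0          # but IEEE 754 leaks through for float arithmetic:
print(math.isnan(x))      # True -- a nan comes back instead of an exception
```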
decimal registers and operations used to be common, from what i understand (and they’re in knuth’s books)
i wouldn’t mind seeing them. there’d be a performance/hardware cost, but floating point numbers have killed people, so it’s probably worth it in some contexts
The IEEE floating point standard includes decimal floating point. IBM makes PowerPC chips with decimal floating point hardware. The demand for this probably comes from banks and other companies that are dealing with money.
10 is a pretty awkward number. The only reason to enshrine it in a data format is to make it (somewhat) easier to format and parse human-readable numeric strings. But that’s not the most common use of numbers in a computer program! Performance is a lot more important than formatting.
Yeah, there are some well-known pitfalls with conversion of IEEE floats to/from human readable strings. They’re avoidable, and storing or transmitting floats as strings should be avoided too. I’ve done a lot of it, and dealing with floats in JSON is so effing awkward compared to just memcpy’ing a float in a binary format.
C# has a decimal type, which is 128 bits, and has base 10 semantics (it’s implemented in software, using the usual base 2 instructions, of course)
and COBOL does base 10 math. at least part of the reason people still use it is because straightforward ports to java and such make the math wrong. not that java can’t do it, but it’s easier to make a mistake
i don’t think you want to use it everywhere. gamedev isn’t going to stop using 32 or even 16 bit floats any time soon, ML is going to keep trying to invent 4 bit floats, etc.
but they’re not the right tool for money. and there’s the famous patriot missile bug. having a somewhat standard base 10 type would be nice, imo (in addition to floats)
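Python ships a comparable software base-10 type; a quick sketch of why it matters for money:

```python
from decimal import Decimal

# Binary floats can't represent 0.1 exactly, so the error shows up in cents:
print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # 0.3, exact base-10 arithmetic
```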
I disagree with the 404 rule. You should absolutely use 404 both for missing resources and for URLs your app doesn’t recognise. Additional context can be provided in the response body, which is presumably a structured error per Rule 10. HTTP is the transport protocol. It has its semantics. Implementations should not willy-nilly change it. The API semantics should be limited to the payload, that is, the response body and maybe non-standard headers. That’s what your client application should interpret. If you’re having a hard time separating the two, imagine using a completely different transport. Say, substitute HTTP with snail mail. How much of HTTP do you need to reinvent to make your API work?
I think this excerpt from Fielding’s original dissertation where he described REST is pretty great
Not only is diverging from the expected semantics of HTTP unusual, but it also presumes that all intermediaries are going to agree with your new twist. Another commenter very smartly commented that “status codes are for clients you don’t control,” which should include all the layers which may or may not be present on the public internet: proxies, caches, etc.
By that logic, why do we have different error-codes at all? Just use http codes 2, 3 and 4 and put custom error texts.
No, I agree with the rule. Don’t use 404 for two different meanings. Since this is not a rare edge case but a common problem in almost any API, having two codes for “request path not understood” and “request path understood, result not there” is absolutely a good idea. You just have to find the best way to do that under the constraint of the available HTTP codes.
We have HTTP codes for HTTP clients, specifically HTTP clients that don’t speak your API, like HTTP proxies. For them 404 is “I don’t have it” and they know how to interpret it. They don’t care whether the URL is handled or not by an app on the server. For what it’s worth, it could be a bunch of files on a disk, and then the two cases really would be the same: an unhandled endpoint is identical to a missing file or directory, and so is a missing resource.
Now, your API client might want to distinguish between the two and your server might provide the information. Good, you should do that within your API. Which means in the response payload, not by inventing your own semantics for existing protocols.
For example, we all settled on 8-bit bytes, but it’s just a convention. In the early days we had all sorts of byte lengths, and even now we occasionally build very specialised CPUs with bytes other than 8 bits long. This doesn’t mean you can decide to use, say, 7-bit bytes everywhere. You can try, but you’re going to have a hard time.
Same goes for HTTP. You might get away with using 410 instead of 404 a bit longer than you would with 7-bit bytes, but the decision is definitely not consequence-free.
So then, let’s assume I use 460 as the error code.
I’d say you do. HTTP Semantics (RFC 9110) specifies a registry for status codes and says a registration MUST contain specific information. This is in contrast with headers, which also have a registry, but their registration is not a MUST and is of a more informative nature.
One of the things the registry specifies is how the response should be handled. For example, 404 is cacheable unless headers say otherwise; 400 is not. And I’m mentioning 400 because clients that don’t understand a status code should treat it as x00 of the same class. So your 460 is going to look like 400 to compliant clients, and Bad Request is probably further from what you intended than 404 is.
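Mechanically, that fallback looks like this (a sketch; the `KNOWN` set is whatever your client actually implements):

```python
KNOWN = {200, 204, 301, 302, 304, 400, 401, 403, 404, 410, 500, 503}

def effective_status(status: int) -> int:
    # RFC 9110: a client that doesn't recognise a code treats it as x00
    # of the same class. So an unregistered 460 degrades to 400.
    return status if status in KNOWN else status // 100 * 100

assert effective_status(460) == 400
assert effective_status(404) == 404
```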
Another thing 9110 also says (15.5. Client Error 4xx):
So basically what I said previously: 404 Not Found + compatible payload explaining whether the API endpoint is not there or the resource is missing.
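A minimal sketch of that approach; the payload field names are made up, the point is only that the distinction lives in the body while the status stays a plain 404:

```python
import json

def not_found(kind: str):
    """Both cases are 404 at the HTTP layer, so proxies and caches treat
    them identically. The API-level distinction rides in the payload."""
    assert kind in ("unknown_endpoint", "missing_resource")
    body = json.dumps({"error": "not_found", "detail": kind})
    return 404, {"Content-Type": "application/json"}, body

status, headers, payload = not_found("missing_resource")
```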
Thanks, I guess you are right. It’s a bit frustrating. Recently I was looking for a code to use for an expired token. I ended up using 498, which is not registered, so I’m violating the spec. But what good is a spec that doesn’t even cover such basic cases… makes me frustrated.
I assume it’s some sort of access token (e.g. API access token). If so, it looks a lot like 401 Unauthorized.
9110 also very clearly states what should happen next:
For example, OAuth defines its own auth-scheme for this header. It also defines a few parameters that allow exposing some information about the nature of the error so your client might not need to parse the payload.
But if you’re using something else, you can use other schemes or invent your own like AWS did.
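A sketch of what that looks like on the wire, using only the stdlib; the challenge parameters are the ones RFC 6750 defines for expired bearer tokens:

```python
from http.server import BaseHTTPRequestHandler
import json

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expired token: 401 plus a WWW-Authenticate challenge, so even
        # generic HTTP clients know they must re-authenticate.
        body = json.dumps({"error": "invalid_token"}).encode()
        self.send_response(401)
        self.send_header(
            "WWW-Authenticate",
            'Bearer realm="api", error="invalid_token", '
            'error_description="The access token expired"')
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```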
As a once-in-a-while casual Python user, requirements.txt has one clear advantage: its usefulness is one command away, because pip is pre-installed with every Python. With Poetry I’d have to figure out how to get it installed before I can install the dependencies of whatever I want to actually run.
Wait, what’s pipx? Do I need yet another package manager to install poetry? Why can’t plain pip do it?
`pipx` is a way to install and manage Python applications. It handles your PATH, executables, virtual environments, et al. `pipx` is like `pip`, but it creates a virtual environment for the thing you’re installing, installs it there, and makes the binaries available somewhere sensible. It’s great for installing tools and applications and keeping them isolated from each other.
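Roughly what that amounts to under the hood (a sketch, not pipx’s actual code; the paths are made up):

```python
import subprocess, venv
from pathlib import Path

def install_isolated(app: str) -> None:
    """Install `app` into its own venv and put a shim on PATH."""
    venv_dir = Path.home() / ".local" / "venvs" / app
    bin_dir = Path.home() / ".local" / "bin"
    venv.EnvBuilder(with_pip=True).create(venv_dir)       # dedicated venv
    subprocess.run([venv_dir / "bin" / "pip", "install", app], check=True)
    bin_dir.mkdir(parents=True, exist_ok=True)
    shim = bin_dir / app
    shim.unlink(missing_ok=True)
    shim.symlink_to(venv_dir / "bin" / app)               # e.g. ~/.local/bin/poetry

install_isolated("poetry")
```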
I wonder whether running an end-to-end test suite before forking would warm the memory enough to show similar results with traditional forking app servers like unicorn. Shopify probably has enough data to tailor a warming test suite to cover most if not all code paths and not take forever.
Eric Wong suggested something similar when I submitted the first version of reforking upstream: https://yhbt.net/unicorn-public/aecd9142-94cf-b195-34f3-bea4870ed9c8@shopify.com/T/#m30f8059af3b0d7f2dbcbbca12aff261b0cb9e1fb
But this would be quite impractical, as you’d very significantly slow down rollout of a new version, and fast deploys are really key in incident management among other things.
Maintaining such a “warmup” suite would also be quite a pain.
This is also speculative, but a test suite has lots of potential to cover code paths infrequently seen in production, plus there’s the resource use of the test code itself.
When you take this through to its logical conclusion you arrive at PGO, because the best results likely come from measuring how often certain code paths are executed in production. We just need the Ruby developers to come up with a way to actually feed that data to the JIT.
PGO is Profile-Guided Optimization, for folks like me who didn’t know.
If you feel like your CSS is too readable, this seems like a great solution.
This looks interesting. Masked secrets and isolated environments look nice. Argument docs look nice too, but I suspect they only work for those custom commands right now. Show me how well Workflow Discovery works with git and then we’ll see if it’s workable. Speaking of which, the whole demo doesn’t show any external commands being executed. Integrating this with existing software might pose some challenges, I imagine.
I’m most intrigued by how Undo will be implemented. I imagine it would require some sort of fs snapshotting to work with even basic `rm`. Or how would it work in a concurrent system (e.g. how do you undo `cat test >> file` after something else has written to that file later)? I’m also not sure how this can be implemented for some operations at all. Say, how would one undo a firewall configuration? Or operations that interact with other machines: unsend an email, undo a netcat scan, undo `psql mydb < random.sql`?

Considering that π generally is not used in any “correct” (à la pi-does-not-exist) form in most calculations (not even at NASA), I do wonder how many curves would be needed to reach that “NASA-approved circle”. 4 is already almost spot on; 8? 12?
If you use four Béziers, it’s off by 2.7e-4;
with eight Béziers, it’s off by 4.2e-6;
16, 6.6e-08;
32, 1.0e-09;
64, 1.6e-11;
128, 2.5e-13;
256, 4.4e-15.
So four isn’t enough to be pixel-perfect, but eight probably is pixel perfect up to a few thousand PPI.
What definition/representation of a circle would you use to test this?
Check that the distance between points on the Bézier curve and the centre of the circle matches the radius of the circle to the required precision.
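That check is a few lines. This sketch uses the standard tangent-handle length k = (4/3)·tan(θ/4) for a cubic segment spanning angle θ, and reproduces the error figures quoted above:

```python
import math

def max_radial_error(n: int, samples: int = 10_000) -> float:
    """Worst |distance - 1| over one segment of an n-piece cubic Bezier unit circle."""
    theta = 2 * math.pi / n
    k = 4 / 3 * math.tan(theta / 4)            # classic handle length
    # Segment from angle 0 to theta; all segments are congruent by symmetry.
    x0, y0 = 1.0, 0.0
    x1, y1 = 1.0, k
    x3, y3 = math.cos(theta), math.sin(theta)
    x2, y2 = x3 + k * y3, y3 - k * x3          # pull back along the end tangent
    worst = 0.0
    for i in range(samples + 1):
        t = i / samples
        u = 1.0 - t
        x = u**3*x0 + 3*u*u*t*x1 + 3*u*t*t*x2 + t**3*x3
        y = u**3*y0 + 3*u*u*t*y1 + 3*u*t*t*y2 + t**3*y3
        worst = max(worst, abs(math.hypot(x, y) - 1.0))
    return worst

for n in (4, 8, 16):
    print(n, max_radial_error(n))   # ~2.7e-4, ~4.2e-6, ~6.6e-8
```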
I have a copy of the PostScript language reference manual from 1985, and it has an `arcto` operator for drawing proper circles. You can see it in the PLRM 3rd ed. on page 191.

It’s still an issue in PDF, which only supports straight line segments and cubic Bézier curves. PS is still around, but it seems to be on the way out. For one, macOS removed the native feature for viewing PS files. And even before that, PS files were converted to PDF. This is an issue for PDF/E specifically, conceived as a format for the exchange of engineering data.
Huh, that’s a surprising omission since most other vector graphics implementations have arcs. (eg SVG, web canvas - tho they postdate PDF)
I no longer have my copy of the reference manual but I believe the goal with PDF was to construct a minimal subset of PS functionality, to make it simpler to implement. from that mindset it made sense to take out circles, since they were also taking out most of the flow control to avoid having to deal with DoS issues
the original goal, I mean. today of course it has JavaScript support, clickable URLs, network features for advertising, and all manner of other crap. they haven’t really revisited the actual drawing operators in a long time though
That doesn’t mean it’s a perfect circle. I would strongly suspect it’s made of Bézier curves, just because those are the primitive shape PS uses. An arc isn’t going to be rendered as-is — it gets combined with any other path segments, then possibly stroked (which turns it into another shape, an outline of the stroked area) or filled.
PDF is the same rendering model as PS, basically.
Ah, abstractions. But what does the right example abstract? According to the OP: “low-level implementation details (e.g., how to heat the oven), intermediate-level functions (e.g., how to bake pizza), and high-level abstractions (e.g., preparing, baking, and boxing the pizza).” Domain-specific concepts all over the place, so let’s go with that. Do we care about any of those concepts enough to abstract them away? That is, can we change/swap/remove/add any of the parts, or is the whole thing “atomic” from the domain’s perspective?
This is very much a matter of perspective. I personally start with the linear (left) type of code and abstract things away when there’s a need. I very much dislike breaking code into pieces prematurely. I’m of the opinion that rules like “methods no longer than 5 lines” are thoroughly misguided and more often than not produce unreadable and extremely inefficient code.
One interesting thing to see would be how well the image gets compressed compared to image-specific formats, or even between different audio codecs.
Kilobyte is 1024 bytes if you’re old or buying disk space. It’s 1000 bytes if you’re young or selling disk space.
And Megabyte is 1,024,000 bytes if you have a floppy disk.
The SI prefix kilo- denotes 1,000. Overloading it to mean 1,024 in some contexts is the ambiguity. Hence Kibi-, Megi- (? I’ve only seen these in geekily-themed software applications, so forgive me if I got this wrong), which are unambiguous. You can even use them outside the storage domain, if that’s your jam. Welcome to the 2 kibimeter race, 2048 meters!
Kilobyte was very unambiguously 1024 bytes some 15 years ago. It became ambiguous when ISO 80000 was published and hard drive marketing teams eagerly jumped on kilo’s “metricness”.
Sadly, that isn’t true. Storage typically used the power of two version because it was important for addressing. A 16- or 32-bit address can access an exact power of two number of 2^10 units but not of 10^3 units. For networking, this was not the case and so data transfer speeds over serial, parallel, and Ethernet links were always expressed in 10^3 units, because they needed to be integer multiples of the clock speed of the bus, which was expressed in decimals.
This led to a lot of confusion. If you have data coming in at 10 Mb/s, and you have 10 MB of storage, how long does it take to fill up? The answer ought to be 8 seconds with consistent units, but the Mb/s probably meant 10^6 bits, whereas the MB probably meant 2^20 bytes. But maybe they both used binary units.
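The arithmetic, spelled out:

```python
bits_per_s = 10 * 10**6            # "10 Mb/s": networking is decimal, 10^6 bits
mb_decimal = 10 * 10**6            # "10 MB" as 10^6-byte megabytes
mb_binary  = 10 * 2**20            # "10 MB" as 2^20-byte megabytes

print(mb_decimal * 8 / bits_per_s)  # 8.0 seconds with consistent units
print(mb_binary  * 8 / bits_per_s)  # ~8.39 seconds with mixed units
```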
Using kibibyte and friends makes it unambiguous: it’s definitely base-2. Unfortunately, enough people still use SI prefixes for binary things that it remains ambiguous.
It’s worth noting that the article is misleading in one respect. The Darwin utilities are now consistent in providing flags that let you explicitly select either base-2 or base-10 prefixes.
FYI, it’s “Mebi”, not “Megi”. The pattern so far* is that you take the first two letters from the SI prefix, add “bi” at the end.
* quebi- has been suggested, but not yet accepted, as the binary prefix counterpart for quetta-
Looks like the beginning of the end of the fantastic progress in tech that’s resulted from a relative lack of regulation.
Also, probably, a massive spike in grift jobs as people are hired to ensure compliance.
Looks like the beginning of the end for the unnecessary e-waste provoked by companies forcing obsolescence and anti-consumer patterns, made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m a EU citizen, and I see this argument made every single time the EU passes a new legislation affecting tech. So far, those worries never materialized.
I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell their products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as having the stem be detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken it is about service-time replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves special exception for batteries that’s even more manufacturer-friendly than other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirements set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic) so in practice that just means, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
That kind of cookie requires no popup, though; only the ones used to share info with third parties or collect unwarranted information do.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to refuse consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example, “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
we do know it and it’s a Li-Ion button cell https://guide-images.cdn.ifixit.com/igi/QG4Cd6cMiYVcMxiE.large
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the barrier those costs pose to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
What is the alternative?
Those companies are clearly engaging in anti-consumer behavior, actively trying to stop right to repair and more.
The industry demonstrated to be incapable of self-regulating, so I think it’s about time to force their hand.
This law can be read in its entirety in a few minutes, it’s reasonable and to the point.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
(from https://www.folklore.org/StoryView.py?project=Macintosh&story=Diagnostic_Port.txt )
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America and not in Europe or Australasia, and it’s cultural as much as it’s economic.
Did a standard electric plug also stifle innovation? Or mandates that a car must fit in a lane?
Laws are the most important safety lines we have, otherwise companies would just optimize for profit in malicious ways.
The reason is literally buckets and buckets of money from defense spending. You should already know this.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent up on taxpayer’s purse and not spawned a silicon valley of their own.
“Spent up”? At anything near the level of the USA??
Yeah.
https://en.m.wikipedia.org/wiki/History_of_computing_in_the_Soviet_Union
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
https://en.m.wikipedia.org/wiki/Economic_calculation_problem
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
But they did spawn a Silicon Valley of their own:
https://en.wikipedia.org/wiki/Zelenograd
The Wikipedia article cites a number of factors:
Government spending tends to help with these kinds of things. As it did for the foundations of the Internet itself. Attributing most of the progress we’ve had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much of a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third-party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising on USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make slippery-slope arguments about how requiring battery replacements & other minor guard rails toward consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. Look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community and which many of their chats were designed after, they want to create a new standard together, and will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol via a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining that your mobile phone electrocutes you, gives you RF burns, or stops your TV reception, because you don’t realise there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but also express concern that regulation will quickly slippery slope away, but I am curious if you think the current system where there is more and more lock in both on the web and in devices can be pried back from those parties?
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now we see monopolies appearing again, and the associated anti-consumer decisions that benefit the bigger players. This situation is well known: tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations not to attempt to maximize their profits when they have an opportunity.
Well, I guess I am wrong then, but I prefer slower progress, slower computers, and generating less waste than just letting companies do all they want.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
They’re not, if we give them access to the information and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap components (pairing), supported for only a year, what are people to do? If people have no idea how secure someone’s authentication on GitHub is, how can they make an informed decision about security?
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong, but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-run marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen who has the same priorities I do not be able to buy the device they want?
Then I have some very good news for you!
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ’90s. Schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers, but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying in services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside, they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because they have locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice to upgrade.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t,[1] so once a handset has shipped, the vendor has made as much money as it possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at the point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime (which research has shown has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
That is remarkably untrue. At least one entire school of economics proposes exactly that.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
“Market failure” just means “the market isn’t producing the prices I want”.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag, for instance, is the existence of such long-lived “schools”, which are a sign of dogma more than of sincere inquiry.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s a pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of market is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market, and that’s something you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling gave customers information and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. Eventually the regulations banned goods below a certain efficiency rating, but that was largely unnecessary because the market had adjusted and most things were A-rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
I can see how such usurpation could distort my view.
Well… yeah.
I love this example. It plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion; they make them because they can’t measure the criterion even if they do care. Even a Libertarian should admit that making good purchase decisions requires being well informed.
To be honest, I do believe some select parts of the economy should either be centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented out at a flat rate — no discount for bigger volumes. Then you just let private operators rent the bandwidth however they please and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the Internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to require capital investment. Does it come from general taxation or from revenue raised from the operators? If it’s the former, how do you ensure it’s equitable? If it’s the latter, you’re going to want to amortise the cost across a decade, and pricing so that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less willing private enterprises will be to invest in it, and if they do, the more rent they will want to extract from their investment. There’s also the fact that fibre (or copper) is naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
Ah, I see. Part of this can be solved by making sure the public part is stable and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject; I’m not informed enough to have an opinion on where the public/private frontier is best placed.
Good point, I’ll keep that in mind.
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so, because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure was possible only because private investors gambled, and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to build and did not recoup its costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price, and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away taxpayers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ’90s (which often didn’t work; you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk, and they get a chunk of the rewards if the risk pays off, but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years’ time, it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or where the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win); the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective, and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
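To make the two rules concrete, here’s a toy sketch (not any real procurement system; the data shapes are made up):

```js
// Toy model of the two award rules described above.
// bids: array of { bidder, amount } asking prices; lower is better.

// Rule 1: the contract goes to the *second-lowest* bidder, at their own
// price. Undercutting everyone no longer wins, so absurdly low bids are
// pointless.
function secondLowestWins(bids) {
  const sorted = [...bids].sort((a, b) => a.amount - b.amount);
  return { winner: sorted[1].bidder, paid: sorted[1].amount };
}

// Rule 2: the lowest bidder wins but is paid the runner-up's price,
// i.e. an amount someone else also thought the job could be done for.
function lowestWinsSecondPrice(bids) {
  const sorted = [...bids].sort((a, b) => a.amount - b.amount);
  return { winner: sorted[0].bidder, paid: sorted[1].amount };
}

const bids = [
  { bidder: 'A', amount: 90 },
  { bidder: 'B', amount: 120 },
  { bidder: 'C', amount: 150 },
];
console.log(secondLowestWins(bids));      // { winner: 'B', paid: 120 }
console.log(lowestWinsSecondPrice(bids)); // { winner: 'A', paid: 120 }
```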
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super-highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference. The only way to really find out would be to set up a Great Leap Forward in the 21st century.
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies suggesting that the average lifetime of (at least some categories of) household appliances has been decreasing in recent decades, but they show about the same thing regardless of jurisdiction (i.e. even in those without labeling or energy-efficiency rules, or with different labeling rules), and it’s a trend that started prior to energy-efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
That’s good to hear.
For household appliances, energy ratings are given based on performance at full rated capacity. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously none for electronic displays and lighting sources). The ratings are also based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
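To put purely illustrative numbers on it: suppose a washer is rated at 0.750 kWh/cycle and the motor accounts for, say, 10% of that, i.e. 0.075 kWh. Even a 5% saving from lighter moving parts is only about 0.004 kWh/cycle; a few thousandths on a figure reported to three decimals, nowhere near enough of a ratings win to justify gambling on higher failure rates.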
Much like orthodox Marxism-Leninism, the Austrian School describes economics by how it should be, not how it actually is.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets; I’m talking about things like devices sold only on the Japanese market for a few years, or Soviet ZX Spectrum clones. All of them are repairable even today, some even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore, so you can only get them from existing stocks or use clone parts). It’s pretty ridiculous that I can repair thirty-year-old hardware just fine, but if my MacBook croaks, I’m good for a new one; not because I don’t have (access to) equipment, but because I can’t get the parts, and not because they’re not manufactured anymore, but because no one will sell them to me.
Deregulation was certainly meant to achieve a lot of things in particular, not just general outcomes like a more competitive landscape: every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved those goals in the short run; it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been heavy progress in smartphone hardware over the last 10 years.
Since 2015, every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation makes progress more difficult. IMHO it will drive innovation: phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game. Just as PC hardware specs were once propelled by gaming, the mobile market is driven by games; mobile is, I believe, now the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
But a lot of games are intensive, & enthusiasts often prefer them. Still, the time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I was recently reunited with my OnePlus 1 (2014) running LineageOS, & it was choppy at just about everything (this was using the apps from when I last used it in 2017, in airplane mode, so not just contemporary bloat), especially loading map tiles in OSM. I tried Ubuntu Touch on it this year (2023) (listed as having great support) & it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even if it’s not performance-bottlenecked, efficiency is certainly better on newer hardware (I highly doubt the savings would outweigh the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years & more than competitive once you factor in cost. I doubt anyone is buying into either platform purely based on performance numbers anyhow, versus ecosystem and/or wanting hardware options not offered by one or the other.
That’s what I’m saying — Qualcomm goes for large volumes of mid-range chips, and does not have products on the high end. They aren’t even trying.
BTW, I’m flabbergasted that Apple put M1 in iPads. What a waste of a powerful chip on baby software.
Uh, what about their 8xx-series SoCs? On paper they’re comparable to Apple’s A-series; it’s the software that’s usually worse.
Still a massacre.
Yeah, true, I could have checked myself. Gap is even bigger right now than two years ago.
Qualcomm is in a self-inflicted rut enabled by their CDMA stranglehold. Samsung is even further behind because their culture doesn’t let them execute.
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-5-single-Android-980x735.jpeg
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-Multi-Android-980x735.jpeg
Those are some cherry-picked comparisons. Apple releases on a different cadence. If you check right now, the S23 beats it, as do most current flagships. If you blur the timing, it’s all about the same.
It would cost them more to develop and commission fabrication of a more “appropriate” chip.
The high-end Qualcomm is fine. https://www.gsmarena.com/compare.php3?idPhone1=12082&idPhone3=11861&idPhone2=11521#diff- (may require viewing as a desktop site to see 3 columns)
With phones of the same tier released before & after, you can see the benchmarks are all close, as is battery life. Features are wildly different though, since Android can offer a range of different hardware.
It doesn’t for laptops[1], so I doubt it would for smartphones either.
[1] https://www.lowtechmagazine.com/2020/12/how-and-why-i-stopped-buying-new-laptops.html
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, and 12 have distinct visual styles, and between vendors the distinction varies further. This may be less apparent on OnePlus, as they use their own OxygenOS (AOSP upstream, of course), or at least they used to. But consumers notice, even if they can’t clearly articulate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful, but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers but that manufacturers don’t provide, because it goes against their interests. I think it’s a lie that invisible-hand believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means they were given a Hobson’s choice.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
Okay, that was sloppy of me.
“Not wanted more than any of the other features on offer.”
“Not wanted enough to motivate serious investment in a competitor.”
That last is most telling.
This cynical view is unwarranted in the case of the EU, which so far is doing pretty well at avoiding regulatory capture.
The EU has a history of actually forcing companies to innovate in important areas where they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements that are then tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core design. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters; there were quite a lot of refinements in the ’50s and ’60s, and in the ’30s they were still selling ones that required a central compressor in the basement, with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too. Most consumers can’t (and shouldn’t be expected to) read power consumption graphs, the impact of one better device is spread across at least two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers in ways they’re not immediately aware of. But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose: do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels describing a security SLA, along the lines of how long the device would receive security updates and how quickly reported vulnerabilities would be fixed.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels, but companies were free to commit to as much or as little as they wanted. Her research looked at this across a few consumer-goods classes and used the standard methodology where users are shown a small number of devices with different specs and different things on these labels, and are then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers, but she found that users consistently were willing to select higher-priced things with better security guarantees and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that would otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although that is probably cancelled out by the higher impact of production and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced the unibody MacBooks? Or the 10-minute video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery while remaining waterproof. Or how they came up with a fancy new screen glass that can survive 45 drops.
Depending on how you define “progress”, there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that will ever come to the mass-produced iPhone, but maybe a useful one for some professionals. With available schematics, this might have a chance to actually come to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.
Phones have not made meaningful progress since the first few years of the iPhone. It’s about time.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
thank god
There’s a chance that tech companies start to make EU-only hardware.
This overall shift will favour the kind of long-term R&D investment that was placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
I reckon this is a bad take.
Lists are fine.
Let’s take at face value the claim that numbered lists are needed to facilitate references to specific items in the list. That case is covered very well by current standards; one just has to use them properly. It goes something like this:
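For instance (a minimal sketch; the `law` class and the id scheme are placeholders), give every item a stable id and generate the visible numbers with CSS counters:

```html
<ol class="law">
  <li id="item1">First provision…</li>
  <li id="item2">Second provision…
    <ol>
      <li id="item2-1">A nested provision…</li>
    </ol>
  </li>
</ol>
```

```css
ol.law, ol.law ol {
  list-style: none;     /* suppress the built-in markers */
  counter-reset: item;  /* each (sub)list scopes its own counter */
}
ol.law li {
  counter-increment: item;
}
ol.law li::before {
  content: counters(item, ".") ". ";  /* renders 1., 2., 2.1., … */
}
```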
Now, elsewhere, where you want to reference the items, you link to them properly:

```html
<a class="list-reference" href="#item1">…</a>
```

And style the link so that its text is generated from the number of the item it points to (sketch below). This way you get generated item numbering that is consistent throughout the document: whatever you do to the list (add items, remove them, reorder them), the document remains consistent and you don’t have to edit every single reference whenever the list changes.
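Styling the reference is the speculative part: the `target-counter()` function from the CSS generated-content specs does exactly this, but so far it’s mainly implemented by paged-media processors (Prince, WeasyPrint and friends) rather than browsers, so treat this as a sketch:

```css
/* Render the referenced item's number as the link text:
   target-counter() resolves the `item` counter at the href target. */
a.list-reference::after {
  content: target-counter(attr(href), item);
}
```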
Now, the issue of copying the generated counters is still present. It can be rectified with a little bit of JS that inlines the generated item numbering. It would use the very same API that you’ve probably seen used in the most annoying cases, where a website adds some sort of attribution to the text you copy.
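A rough sketch of that JS, assuming the `ol.law` markup above (simplified: a selected parent item will also repeat its nested items’ text):

```js
// Recompute an <li>'s number the same way counters(item, ".") renders it,
// by counting preceding siblings at each nesting level.
function itemNumber(li) {
  const parts = [];
  for (let el = li; el && el.tagName === 'LI'; el = el.parentElement.closest('li')) {
    parts.unshift(Array.prototype.indexOf.call(el.parentElement.children, el) + 1);
  }
  return parts.join('.');
}

// On copy, rebuild the clipboard text with the generated numbers inlined.
document.addEventListener('copy', (event) => {
  const selection = document.getSelection();
  if (!selection || selection.isCollapsed) return;
  const items = [...document.querySelectorAll('ol.law li')]
    .filter((li) => selection.containsNode(li, true));
  if (items.length === 0) return; // nothing list-like selected
  const text = items
    .map((li) => `${itemNumber(li)}. ${li.textContent.trim()}`)
    .join('\n');
  event.clipboardData.setData('text/plain', text);
  event.preventDefault();
});
```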
I believe this is why law was brought up at all: it uses these references a lot.
No list is above the law.
I concede, though, that law probably doesn’t fit into this solution neatly. The main problem with law is that the numbering has to stay consistent not only across the document but also across editions, so that all other documents, at all times, can reference the same item in the law regardless of its edition.
I’m not sure what the practice is in the USA, but in my country the common practice is that the latest version of a particular law contains the latest edition of each changed item. Deleted items are replaced with a placeholder (e.g. “6.4.a has been removed by such and such other document”), and new items are added at the end of the corresponding level of nesting.
In this configuration it’s even possible to use the method proposed above.
The whole argument about semantics is moot. If one doesn’t like HTML’s `ol`, they can invent their own custom element and style it however they want. Replicating the default list style would take maybe 20 lines of CSS, and there’s not that much interactivity in it to worry much about the accessibility issues that might come with a custom `select` implementation. If you want to get really fancy, you can invent your own XML schema and style it with XSLT/CSS to your heart’s content. These are all ancient (in internet years) technologies that are well supported by all modern browsers, including mobile. Shortcomings of ordered lists in HTML are very likely pretty low on the (ordered) list of reasons for the downfall of civilization. And I suspect that this might be an argument made not quite in good faith.

I wonder where this idea comes from, or rather where the author of the article got it.
The OP doesn’t directly mention it, but they gesture towards the idea by insisting that the main function of numbering in lists in law is actually identification.
I agree, and I believe it’s empirically true. Many laws came into effect decades ago. Since then many amendments have been made to them, but they still stand in their current editions. At the same time, particular sections/paragraphs have been referenced in external documents, be it court proceedings and rulings, other laws, or a massive amount of writing about laws ranging from books to blog posts. We don’t want to invalidate older references with every amendment we make to the law. For one, that would make it very hard to make sense of old court rulings, as one would have to find the specific edition of the law that was in effect at the time for the ruling to make sense. Stable identifiers for pieces of law seem like a good idea in the pre-computer age (which, in the realm of law, is right now).
That’s not how that works. Sections get renumbered all the time. This is and was the bread and butter of legal publishers.