Isn’t this 100% GitHub’s fault? They’re allowing malicious actors to impersonate them on their own platform. Shouldn’t this expose GitHub to litigation?
The attack began with the attackers somehow obtaining their targets’ personal GitHub access tokens, which Checkmarx has no insight into.
So, first the attackers somehow get the credentials from the user via some out-of-band mechanism. At this point, they are already able to do pretty much anything that the user can do. Possibly they have a limited-access token, but they can at least post PRs impersonating the repo owner. At that point, the PR appears to come from a bot you’ve given trust to, and so you trust it.
The real question is how did they get the PATs. If that’s a GitHub vulnerability, that’s bad. If that’s users leaking their credentials then users need to not do that, just as they also need to not leak their SSH private keys.
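For what it’s worth, GitHub reports a classic token’s granted scopes in a response header, so you can at least check how much a given PAT could actually do. A rough Python sketch (the env var name is just a placeholder):

```python
# Hedged sketch: for classic GitHub personal access tokens, the API echoes
# the token's granted scopes in the X-OAuth-Scopes response header, which is
# a quick way to see how much a leaked token could actually do.
import os
import requests

token = os.environ["GITHUB_TOKEN"]          # placeholder: wherever the token lives
resp = requests.get(
    "https://api.github.com/user",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
print("authenticated as:", resp.json()["login"])
print("granted scopes:  ", resp.headers.get("X-OAuth-Scopes", "(none reported)"))
```

Fine-grained tokens don’t report scopes this way, so treat this as a classic-PAT-only check.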
This is such a promising system, but it requires that banks try out and adopt the system for it to be viable. So far, from what I’ve heard, it is mostly student-run university snack kiosks that use it.
I realize this is an incredibly tall order, but does anyone know if there have even been experiments where FLOSS advocates run their own credit union in order to break new ground on stuff like this?
I’ve also heard that these small-scale deployments are something the Taler org wants to make easier to do, so maybe there will be more of them in the future.
Crypto, while neat, will never see full-hearted support from the likes of, say, the EU. The value-add of Taler is that one can mint essentially an IOU, send it anonymously, and then when the recipient wants to claim it, neither their bank nor my bank has any way of knowing who at my bank sent the money / created the IOU. You can also tag the money with how it may be spent, should you want to bar your child from buying smut, for example. Then there is lots of double-spending protection and such.
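For anyone wondering how the “my bank can’t tell who created the IOU” part can work at all, the underlying trick is blind signatures. A toy sketch of RSA blind signing in Python - deliberately tiny numbers, no padding, and not Taler’s actual wire protocol, just the idea:

```python
# Toy RSA blind-signature sketch (the mechanism Taler-style withdrawal builds on).
# Tiny parameters for illustration only; real systems use large keys, proper
# padding, and audited libraries.
import hashlib
from math import gcd

p, q = 61, 53                       # toy primes
n = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent (Python 3.8+ modular inverse)

def h(msg: bytes) -> int:
    # Hash the coin/IOU into the RSA group.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

coin = b"one euro, serial 12345"    # placeholder coin
m = h(coin)

# 1. Customer blinds the coin before sending it to the bank.
r = 42
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# 2. Bank signs the blinded value; it never sees m itself.
blind_sig = pow(blinded, d, n)

# 3. Customer unblinds; the result is a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m          # anyone can verify, but the bank can't link sig to the withdrawal
```

The bank signs a blinded value, the customer unblinds it, and the resulting signature verifies on the coin without the bank ever having seen the coin itself - that’s where the unlinkability comes from.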
Complaints about the screen resolution are a matter of aesthetics, unless you work on visual digital media. In practice, a low resolution is often easier to use because it doesn’t require you to adjust the scaling, which often doesn’t work consistently across programs.
That said, the X220 screen is pathetically short. The 4:3 ThinkPads are much more ergonomic, and the keyboards are better than the **20 models (even if they look similar). Unfortunately the earlier CPU can be limiting due to resource waste on modern websites, but it’s workable.
The ergonomics of modern thin computers are worse still than the X220’s. A thin laptop has a shorter base to begin with, and the thinness requires the hinges to pull the bottom of the lid down when it’s opened, lowering the screen further. The result is that the bottom of the screen is a good inch lower than on a thick ThinkPad, inducing that much more forward bending in the user’s upper spine.
The top of the screen of my 15” T601 frankenpad is 10” above my table and 9.75” above the keyboard. Be jealous.
Complaints about the screen resolution are a matter of aesthetics, unless you work on visual digital media.
A matter of aesthetics if the script your language uses has a small number of easily distinguished glyphs.
As someone who frequently reads Chinese characters on a screen, smaller fonts on pre-Retina screens strain my eyes. The more complex characters (as well as moderately complex ones in bold) are literally just blobs of black pixels and you have to guess from the general shape and context :)
Complaints about the screen resolution are a matter of aesthetics, unless you work on visual digital media.
I strongly disagree here. I don’t notice much difference with images, but the difference in text rendering is huge. Not needing sub-pixel AA (with its associated blurriness) to avoid jagged text is a huge win and improves readability.
Good for you. Your eyesight is much, much better than mine
I’d be pretty surprised by that, my eyesight is pretty terrible. That’s part of why high resolution monitors make such a difference. Blurry text from antialiasing is much harder for me to read and causes eye strain quite quickly. Even if I can’t see the pixels on lower resolution displays, I can’t focus as clearly on the outlines of characters and that makes reading harder.
As an X220 owner, while I concede someone may like the aesthetics of a low-resolution screen, the screen is quite bad in almost all other ways too. But you’re definitely right about aspect ratio. For terminal use a portrait 9:16 screen would be much better than 16:9. Of course external displays are better for ergonomics and nowadays large enough to work in landscape, too.
I was very fond of my X220 and would still be using it if it hadn’t been stolen from me, but even at the time the display was disappointing and the trackpad was dreadful. I wouldn’t call it the best, certainly not now.
Most modern laptops have fully functional trackpads with gesture support. Apple’s even have proper haptics, so you can push the whole thing with a uniform click instead of the hinged designs most have. I used to be a TrackPoint diehard, but I don’t miss it after using a MacBook.
While I certainly prefer the ThinkPad TrackPoint over traditional trackpads, I must admit, the MacBook’s trackpad is surprisingly usable with just my thumbs and without my fingers leaving the home row.
Unlike most other laptops, the Thinkpad comes with a superior input device: the trackpoint. It requires less finger movement and it has 3 mouse buttons. That is why many people, including me, simply disable the trackpad.
Since at least the x40 generation (Haswell) it’s all been decent Synaptics multi-touch trackpads. Nothing extraordinary, but nothing bothersome either, more than fine.
the device is still better equipped to handle drops and mishandling compared to that of more fragile devices (such as the MacBook Air or Framework).
In my experience this isn’t true (at least for the Framework), and the post doesn’t provide any proof for this claim.
I’ve owned a ThinkPad X230, which is almost the same as the X220 apart from the keyboard and slightly newer CPU. I currently own a Framework 13. Although I didn’t own them both at the same time, and I also have no proof for the counter-claim, in my experience the Framework is no more fragile than the X230 and I feel equally or more confident treating the Framework as “a device you can snatch up off your desk, whip into your travel bag and be on your way.”
(I remember the first week I had the X230 I cracked the plastic case because I was treating it the same as the laptop it had replaced, a ThinkPad X61. The X61 really was a tank; there’s a lot to be said for metal outer cases…)
The rugged design and bulkier weight help put my mind at ease - which is something I can’t say for newer laptop builds.
Confidence and security are subjective feelings, so if owning a chunky ThinkPad makes someone feel this way then good for them. Not to mention I think it’s awesome to keep older devices out of e-waste. However, I don’t think there’s any objective evidence that all newer laptops are automatically fragile because they’re thin - that’s perception as well.
However, I don’t think there’s any objective evidence that all newer laptops are automatically fragile because they’re thin - that’s perception as well.
It’s a reasonable null hypothesis that a thicker chassis absorbs more shock before electronic components start breaking or crunching against each other. Maintaining the same drop resistance would require the newer components to be more durable than the older ones, which is the opposite of what I’d expect since the electronics are smaller and more densely packed.
It’s been years since my x220 died, but IMO the trackpad on the framework is leaps and bounds better than the trackpad on the x220. (Though, the one caveat is that the physical click on my framework’s trackpad died, which is a shame since I much prefer to have physical feedback for clicking. I really ought to figure out how hard that would be to fix.)
The x220’s keyboard is maybe slightly better, but I find just about any laptop keyboard to be “usable” and nothing more, so I’m probably not the right person to ask.
From my recollection: keyboard of the X230 about the same, trackpad of the Framework better (under Linux).
The X230 switched to the “chiclet” keyboard so it’s considered less nice than the X220 one (people literally swap the keyboard over and mod the BIOS to accept it). I think they are both decent keyboards for modern laptops, reasonable key travel, and don’t have any of the nasty flex or flimsiness of consumer laptop keyboards. But not the absolute greatest, either.
I remember the X230 trackpad being a total pain with spurious touches, palm detection, etc. None of that grief with the Framework, but that might also be seven-ish years of software development.
That’s sad. Despite his claim that he’ll be alright for many years, the responsible thing would be to transition leadership of all the projects he leads now.
… by being funded via a heavy store tax from an eternally buggy mess of a proprietary app store whose main value-add is its network effect, its set of sketchy engagement APIs, and its DRM (as in ‘Digital Rights Management’ or ‘corporate-sanctioned malware’, depending on your optics), mainly selling proprietary software.
Its main reasons for ‘contributing’ are that it is part of a risk-management strategy for breaking away and more directly competing with Microsoft, that it empowers specifically those FOSS projects that fit their narrative, and that it promotes an architecture that is close to a carbon copy of its eventual end-game competitor. This time with more anti-cheat making its way in, should there be sufficient traction.
It is the Android story again on a smaller scale. How did that turn out last time? How many of the ‘contributions’ failed to generalise? Or is it different this time because Valve is good, because games? Colour me sceptical.
I think Valve as a company has a lot of problems (though the DRM is pretty mild and one of their lesser problems tbh) and the Steam Deck is an iffier product than people make it out to be, but they’re actually being a good citizen here. Yes, they’re funding the things relevant to them out of self-interest (i.e. case-insensitive FS, WaitForMultipleObjects API clone, HDR, Mesa work, etc.), but they’re working with upstreams like the kernel, Mesa, and freedesktop.org to get the work merged upstream, properly reviewed by the people that work on it, and usable for everyone else. Android never worked with upstreams until maintaining their own things in an opaquely developed fork became unsustainable.
(Sometimes I think they might be using a bit too much commodity - the Arch base of SteamOS 3 seems weird to me, especially since they’re throwing A/B root at it and using Flatpak for anything user visible…)
You only need mild DRM and the DMCA for the intended barrier to entry, suppressive effect and legal instruments; anything more exotic is just to keep the hired blackhats happy.
If I’m going to be a bit more cynical: they are not going the FreeDesktop/Linux rather than Android/Linux route out of the goodness of their hearts so much as because they simply lack access to enough capable systems engineers of their own, and the numbers left after Google, then the ODMs, then Facebook/Meta, then all the other embedded shops have had their fill aren’t enough to cover even the configuration-management needs.
Take their ‘contributions’ in VR. Did we even get specs for the positioning system? Activation / calibration for the HMD? Or was it a multi-year, expensive reversing effort, trying to catch up and never quite getting there, just to be able to freely tinker with hardware we paid for? And that was for hardware they produced and sold at a time when open source wasn’t a hard sell by any means.
Did we get source code for the ‘Open’VR project that killed off other, actually open ones? Nope, binary-blob .so:s and headers. Ok, at least they followed the id Software beaten path of providing copyleft versions of iterations of the Source engine so people can play around with ports, explore rendering tech, etc.? Nope. If you have spotted the source on the ’hubs it’s because it’s a version that was stolen/leaked.
Surely the lauded Steam Deck was sufficiently opened and upstreamed into the kernel? Well, not if you include the gamepad portions. It’s almost as if the contributions hit exactly what fits their business case and happens to feed into the intended stack, and little to nothing else. Don’t anthropomorphise the lawnmower and all that.
To me, it looks like Valve wanted to make a gaming console, and used Linux as a way to pull that off. If you’d told me 25 years ago that you’d be able to play Windows games on a handheld Linux machine, I’d have been blown away. To me it still seems almost miraculous. And they’re doing this while (as far as I know) fulfilling their obligations to the various licenses used in the Linux ecosystem.
Does the fact they’re doing this for commercial gain invalidate that?
I don’t know what you expected. They’re almost certainly not going to give you the crown jewels used to make the headset, but all the infrastructure work is far more useful as it benefits everyone, not just the people with a niche headset.
They patented and own the hardware and the binary blobs, and sold the hardware devices at a hefty price tag. I’d expect an average FOSS participant to integrate with and reinforce existing infrastructure, not to vertically integrate a side band that locks you into their other products.
I’ve owned a Steam Deck for about a year now. No idea why people have a problem with the size. My kids play on it and don’t complain, and we do have a Nintendo Switch (smaller) to compare. Sure, it’s bigger, but I don’t think it’s a big deal. On top of that, with a dock it works perfectly as a home console system, so the size matters even less.
I really enjoy it and recommend getting it if you’re thinking about it.
I don’t like the size and ergonomics. But I’m in the minority on that one; people with big hands especially seem to love it.
There are more abstract concerns regarding its place as a product (is it a PC or a console? whichever is more convenient to excuse a fault), but otherwise the device is a pretty good value. I just don’t game that much, and when I do, it’s a social thing.
It might have a problem depending on what input method you use. I have average-sized hands and I like to use the trackpad for FPSes and strategy games. For the FPS case, reaching between the trackpads and the face buttons gets really annoying. You can map the grip buttons to the face buttons, but then you’re losing out there.
Even with big piano friendly hands the steamdeck ergonomics are hard. If you don’t have a big toy budget, testing someone else’s is highly recommended. I mostly use my three steamdecks for various debugging / UI / … experiments (nreal air glasses + dactyls and the deck tucked away somewhere). If Asus would be able to not be Asus for 5 minutes the ROG Ally would’ve been an easy winner for me.
I hacked mine to run Linux as I’ve done with all other devices I’ve used throughout the years; it didn’t boot Windows once. As for their ‘intentions’: whatever laptops I have scattered around all came bundled with Windows, ‘intended’ to be the OS in use.
DSDT fixes and kernel config to get the Ally working was less effort than I had to do to get actual access to the controllers and sensors on the Steam Deck.
Fair enough. I admin RHEL for my day job and use Arch on my laptop; when getting the SD I knew I’d keep it more appliance-y rather than getting into a fully custom OS. I just wanted something to play Persona 5 in bed.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Also Valve’s Deck Verified program is very flawed. It’s quite a bit better than nothing, but it’s flawed. The biggest (but not only) problem IMO is that a game that has a control scheme not optimized for controllers - but still fully supports controllers - can be marked Verified. As an example, Civ V and Civ VI both basically just use the trackpad like a mouse, and the other buttons have some random keybinds that are helpful. Now, those are basically keyboard-and-mouse games… so to a certain extent I totally get it. But I should be able to click into a list of things and use the joysticks or D-pad to scroll down the list. I can’t. Instead, I have to use the trackpad to position the cursor over the scroll bar, then hold right trigger, then scroll with my thumb. This is extremely unergonomic.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Right, it’s really chunky - I might use it more if they had a mini version. The only use for the portability is at home (i.e. on the porch). It’s not small enough that I’d want to carry it in a bag when commuting, around town, or waiting for someone, and if I’m on vacation, the last thing I want to do is play video games instead of touching grass or spending time with people. If I really want to play a game, I’ll probably use the laptop I have (even if that restricts choice of game - because I have a Mac…). Again, not that much of a gamer, so it’s different values I guess.
I have normal-sized male hands and my girlfriend has relatively small hands, and it works very well for both of us. She was actually surprised how ergonomic the Steam Deck is given the size. Other than that, I have only gotten positive reactions to the ergonomics.
Right now you can install Steam on your regular desktop Linux system, throw in Lutris to get games from other stores and you are good to go. This has been so far the best year to turn your family into Linux users yet.
It is far from ideal, but still a great improvement. And if we manage to get up to – let’s say – 10% penetration in EU, this is going to help immensely to combat mandatory remote attestation and other totalitarian crap we are going to end up with if Microsoft, Apple and Google keep their almost absolute dominance.
I appreciate that both this comment and its parent make good points that are not necessarily in conflict. I would distill this as a call for “critical support” for Valve, to borrow a term from leftist politics.
I have to say that I have had far more luck managing non-Steam game installs within Steam (you can add an entry to it, with a path, and it will manage it as a Proton install if you’d like; you basically just use Steam as a launcher and Proton prefix manager) than via Lutris.
My opinion of Lutris is that it is a janky pile of hacked-together non-determinism which was developed over a long period of time over many, many versions of Wine, and over many, many GPU architectures and standards, and long before Proton existed… which miraculously may work for you, although often will require expert hand-holding. Avoid if you are new to Linux.
Their improvements to Proton/Wine have made it so I could go from booting Windows once a day to play games to booting it once a month to play specific games. Like all other for-profit companies their motives are profit-driven, but so far they are contributing in ways that are beneficial and compatible with the broader Linux ecosystem. Unlike Microsoft, Oracle, Google, and Amazon, they don’t have an incentive to take over a FOSS project; they just don’t want to rely on Windows. But we should always keep an eye out.
Getting games to work by default on Linux also makes it much easier for people interested in Linux to try it out and people not interested to use it when convenient, which is a win in my book.
Did you look at the slides or watch the video of the talk? All their contributions are upstreamed and applicable to more than just their use case. Everything is available on Arch as well (before SteamOS was released they actually recommended Manjaro, because they are so similar). You can use Proton for the Epic Games Store or other Windows apps. Of course they are doing this in self-interest, but according to Greg Kroah-Hartman and a lot of other kernel maintainers this isn’t a bad thing.
The Steam Deck is the first “real” consumer Linux computer to have sold over a million units. I hope more Linux handhelds are released in the coming years :)
The obvious irony here is that it is in Valve’s best interest for their stuff to be upstreamed. It’s not like they can fork KDE (for example, since it’s used by SteamOS) and maintain & support their own fork.
I used to be very against stale bots until a project I started gained some traction.
At first, I was excited for every issue and PR, but as the number of users grew, so did the issues and PRs. Some of them I simply had no way to handle (for example, they only happen on macOS, and I have no way to afford a Mac). I left them open out of respect, but it definitely demoralized me to see issue numbers pile up.
After a certain amount of time, I just burnt out. I wasn’t checking issues or PRs, because I felt that if I replied to one, I should at least have the decency of looking at the others. I mean, these people took time out of their day to contribute to something I made, and I ignore their issue in favor of some other random one just because that’s the one that happened to be at the top of my GH notifications? So I just stopped looking at them, and they kept piling on and on to the point where it was completely unmanageable.
Thankfully I never had to resort to stale bots. Some dedicated users reached out and I made them maintainers, and they’re the ones taking (good!) care of the project now. I even moved it to the nix-community organization so it was clear it was no longer just “my” thing.
Still, I can definitely empathize with those who use stale bots. If I had that set up, I probably wouldn’t have burnt out so quickly. I know it might feel disrespectful to close the issue after some time, but I feel it’s even more disrespectful to just ignore everything completely due to the number of issues. (Similar thing happened recently with lodash, who closed every issue and PR by declaring issue bankruptcy)
More than anything, this highlights how incredibly underbaked GitHub Issues is as a bug tracking platform. There are any number of reasons that an issue may remain open; e.g., it has not yet been triaged, or it has been triaged and found to be low priority (e.g., if you aren’t actively supporting Mac users, perhaps any issue found on a Mac is low priority), or it’s frankly just not super important when compared to fixing critical defects and security issues. It should be possible to focus your attention on just the high priority, actionable issues, without being overwhelmed by the total open issue count.
It’s not unhealthy for a project to have a backlog of issues that may remain open indefinitely, reflecting the fact that those issues are lower in priority than the things you’re getting around to fixing. Closing issues really ought to be reserved for an active refusal to make a particular change, or if something that’s reported is not actually a bug.
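Until the UI grows a real triage view, you can at least approximate one against the API. A rough sketch, assuming a hypothetical priority/high label and placeholder owner/repo names:

```python
# Rough triage view GitHub's UI doesn't really give you: list only open
# issues carrying a hypothetical "priority/high" label and ignore the rest
# of the backlog. Repo and label names are placeholders.
import requests

OWNER, REPO = "example-org", "example-project"   # hypothetical repo
url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
params = {"state": "open", "labels": "priority/high", "per_page": 100}
headers = {"Accept": "application/vnd.github+json"}  # add an auth token for private repos / higher rate limits

resp = requests.get(url, params=params, headers=headers)
resp.raise_for_status()
for issue in resp.json():
    if "pull_request" in issue:      # the issues endpoint also returns PRs
        continue
    print(f"#{issue['number']:>5}  {issue['title']}")
```

It’s a workaround, not a fix: the labeling discipline still has to come from the maintainers.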
I fully agree, it’s also definitely why I never closed any of those issues - I would love for them to get fixed! The first time I actually closed a contribution, it was a PR that contributed a ton of stuff, but it was just so disorganized (it really should have been split up into 15+ PRs) that it would take me weeks to get over all of it. Still, it took me over a month to own up to the fact that I was never going to be able to merge it in that state, and I felt horrible about it at the time. They took a ton of time to make a bunch of contributions to my project! And I just refused them. Still feel kind of sad I couldn’t find a better way to merge that. Long-term, though, it definitely improved my mental health (which was at an all-time-low at the time).
Thanks for sharing this, and I can understand the urge to close those issues, fix them and keep the number down. But I also learned to just live with it. Maybe some people will have to get into the same position first to understand why a maintainer spreads their attention so selectively, but they will eventually. The other option is to just give up, and then nothing progresses. I’ve had issues that I didn’t have the hardware or time for, and eventually someone came around to work on them.
You did the right thing. The difficulty of reviewing a PR scales with something like O(lines^2) * overall complexity of the diff. And if someone is willing to put that much work into a PR, they should be willing to put in the extra time to make it possible to review. Otherwise I’d suspect they were any of several bad characters:
The CV stuffer.
The security researcher looking to prove their pet theory about code review/open source.
The artiste, who will berate anyone too stupid to understand their clearly superior work.
That still has the problem of “I need to go look through every issue” which can be challenging at times (burn-out, ADHD, simply too many issues…).
I think this might actually be one of the most reasonable uses of a stale bot - just mark it as “stale”, but don’t do anything else. That actually signals to the maintainers that these should probably get looked at!
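Roughly what a “label only, never close” pass could look like, assuming a hypothetical stale label, a token in GH_TOKEN, and placeholder repo names (pagination omitted for brevity):

```python
# Sketch of a label-only stale pass: flag issues untouched for STALE_DAYS,
# but never close anything. Repo names, label name, and env var are placeholders.
import datetime
import os
import requests

OWNER, REPO = "example-org", "example-project"
STALE_DAYS = 90
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=STALE_DAYS)

s = requests.Session()
s.headers.update({
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
})

issues = s.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues",
    params={"state": "open", "sort": "updated", "direction": "asc", "per_page": 100},
).json()

for issue in issues:
    if "pull_request" in issue:                      # skip PRs
        continue
    updated = datetime.datetime.fromisoformat(issue["updated_at"].replace("Z", "+00:00"))
    if updated < cutoff and not any(l["name"] == "stale" for l in issue["labels"]):
        # Add the label, but never close: the goal is a triage signal, not a guillotine.
        s.post(issue["url"] + "/labels", json={"labels": ["stale"]})
```

The point is purely the signal: a maintainer can filter on the label when they have energy, and nobody’s report gets auto-closed underneath them.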
I mean if the stale bot is closing issues without anyone looking at them, then it could be closing issues that are both important and easy to fix. Then any sense of being on top of things that you get from seeing fewer open issues would be false, and worse you are pissing off people who open issues that are closed for no good reason.
I think this might actually be one of the most reasonable uses of a stale bot - just mark it as “stale”, but don’t do anything else. That actually signals to the maintainers that these should probably get looked at!
I don’t see the point. Couldn’t maintainers just sort open issues in chronological order of the most recent activity?
For your example: wouldn’t it make sense to mark them as macOS and simply ignore them, only reviewing something with code that passes tests there?
I think it’s okay to have stuff you cannot work on and I think it’s wrong to assume that you get everything just because you write about it.
Maybe I am overly focusing on that example, but I don’t think this somehow gets solved by a stale bot and you didn’t use one for a reason.
I think that it makes sense to make clear what a user should expect. You can clarify that you won’t support a certain platform, you can clarify that you cannot support it, but accept patches. And you can certainly make clear if you don’t want to use your issue tracker to give support.
It’s fine, it’s not rude. Of course being on a non-supported platform can be a bummer and as someone who often uses more exotic things I hope that doesn’t happen and I am always happy when someone at least would welcome patches.
I think stale bots are rude for the reason you mention. If you have an open bug tracker and I contribute something sensible having a bot close it down doesn’t feel like the nicest thing. Of course there might be “I want this” and “How to do this” issues, especially on GitHub, and they tend to be closed, but that’s not what the stale bot does. It’s worse even, it kind of strengthens that attitude since people who write “I still want this”, “When is this done”, “This is important”, “Your project sucks without this”, etc. will prevent the issue from going stale.
Stale bots don’t seem to be a good solution here, since it feels like they do the opposite of what you want.
Do people feel accomplished when bots close issues, more often than not for no reason at all?
And yes, the GitHub issue tracker is bad. People get creative though with templates. Maybe that’s a better way? Maybe another way would be using an external issue tracker.
I agree with what you’re saying, and the quality of issues/PRs improved significantly once I added some basic templates.
Still, while I don’t particularly like stale bots and wouldn’t put them on my projects, I just meant to say I get why people reach for them. Not that I particularly agree with bots who just close issues independent of triage status.
But yes, I think GitHub’s fairly simplistic issue tracker just makes the whole problem a lot worse. But I also think that a project would need to reach a certain scale before an external issue tracker is justified. I mean, I have no idea how I’d submit a bug report to Firefox. Some random library I found on GitHub though? Sure, just open an issue. There’s a lot of value in that.
Is this an argument? Mobile editing is dog shit. It’s just awful top to bottom. I can’t believe we’re 15 years into iOS and they still don’t have frigging arrow keys, let alone actually usable text editing. Almost daily, I try to edit a URL in mobile Safari and I mutter that every UX engineer at Apple should be fired.
I don’t really know why you’re singling out Safari, when Google/Chrome have a long history of actually trying to get rid of displaying URLs. And it’s been driven not by “UX engineers”, but primarily by their security team.
(and to be perfectly honest, they’re right that URLs are an awful and confusing abstraction which cause tons of issues, including security problems, and that it would be nice to replace them… the problem is that none of the potential replacements are good enough to fill in)
My point is that I’m not aware of Apple, or “UX engineers on the Safari team”, being the driving force behind trying to eliminate URLs, and that we should strive for accuracy when making claims about such things.
Shrug! The Android Play Store app does this. Terrifying! It breaks the chain of trust: reputable app makers link to a URL (thankfully, it’s still a website), but you have to use the app anyway to install anything, and it has nowhere to paste the URL, let alone see it, so you can’t tell whether you’re installing the legit thing or not. Other than trusting their search ranking, the best you can do is compare the content by eye with the website (which doesn’t actually look the same).
I’m reluctant to install third-party apps in general, but, when I do, preserving a chain of trust seems possible for me: if I click a link to, say, https://play.google.com/store/apps/details?id=com.urbandroid.sleep on Android, it opens in the Play Store app; and, if I open such a URL in a Web browser (and I’m signed in to Google), there’s a button to have my Android device install the app. Does either of those work for you?
Wow! That did not work in Firefox just one month ago (when I had to install Ruter on my new phone). Now it does. I tried Vivaldi too, and it doesn’t even ask whether I want to open it in Google Play.
Browser devs to the rescue, I guess, but as long as the app isn’t doing their part – linking to the website – the trust only goes one way.
Does it though? I mean, you’ll spend much longer fiddling to get the text right!
If you think “oh this’ll just be a quick reply” and then end up actually typing more than you thought you would, it makes sense to finish the job you started on mobile, which then actually takes more time. Especially when you’re on the go and you have no laptop with you.
It really just means I use the phone for composing conceptually light things because I don’t want to mess with it any more than necessary. (This is likely an adaptation to the current state versus a defense of how it is.)
I don’t miss arrow keys with iOS Trackpad Mode[1]. The regular text selection method is crap, but it works well enough doing it via Trackpad Mode.
I think part of the problem with the iOS Safari URL bar is that Apple tries to be “smart” and modifies the autocorrect behavior while editing the URL, which in my case, ends up backfiring a whole lot. There’s no option to shut it off, though.
Agreed. Just the other day I found the on screen keyboard on my iPad was floating and I couldn’t figure out how to make it full size again without closing the app. A few days later I had the thought to try to “zoom” out on the keyboard with two fingers and it snapped back into place!
As someone more comfortable with a keyboard and mouse, I often look for a button or menu. When I step back and think about how something might be designed touch first, the iOS UX often makes sense. I just wish I had fewer “how did I not know that before!” moments.
I mean, what meaningful way is there to make it discoverable? You can’t really make a button for everything on a phone.
One other commonly unknown “trick” on iOS is that tapping the top bar often works like the HOME key on desktops, but again, I fail to see an easy way to “market” it, besides Clippy or some other annoying tutorial.
Actually, the ‘Tips’ app could have these listed instead of the regular useless content. But I do think that we really should make a distinction between expert usage and novices, and both should be able to use the phone.
I really don’t have an answer to that. I’ve never looked through the Tips app, nor have I been very active in reading iOS-related news[1]. Usually I just go along until I find a pain point that’s too much, and then I try to search for a solution or, more often, suffer through it.
[1] I do enjoy the ATP podcast, but the episodes around major Apple events are insufferable as each host casually drops $2,000 or more on brand new hardware, kind of belying their everyman image.
The far more frustrating thing on lobste.rs is that the Apple on-screen keyboard has no back-tick button. On a ‘pro’ device (iPad Pro), they have an emoji button but not the thing I need for editing Markdown. I end up having to copy and paste it from the ‘Markdown formatting available’ link. I wish lobste.rs would detect iOS clients and add a button to insert a backtick into the comment field next to the {post,preview,cancel} set.
Long-press on the single-quote key and you should get a popup with grave, acute etc accents. I use the grave accent (the one on the far left) for the backtick character.
Thank you! As someone else pointed out in this thread, iOS is not great for discovery. I tried searching the web for this and all of the advice I found involved copying and pasting.
Oddly enough, I knew about it for entering non-English letters and have used it to enter accents. It never occurred to me that backtick would be hidden under single quote.
This seems super useful, but I’ve spent the last ten minutes trying to get it to:
Enter selection mode using 3D Touch
Get the trackpad to not start jittering upwards or downwards
It seems either that my phone’s touchscreen is old and inaccurate or I am just really dang bad at using these “newfangled” features.
I agree with your other reply - discoverability is atrocious. I learned that you can double/triple tap the back of your phone to engage an option which blew my mind. I wonder what I’m missing out on by not ever using 3D touch…
Samesies. The funniest bit, at least for me, is that I’m usually just trying to remove levels of the path, or just get back to the raw domain (usually because autocomplete is bizarre sometimes). This would be SUCH an easy affordance to provide since URLs already have structure built-in!
You may already know about this, but if you put the cursor in a text field, and then hold down on the space bar, after a second or two you enter a mode that lets you move the cursor around pretty quickly and accurately.
edit: I guess this is the “trackpad mode” mentioned below by /u/codejake
Arthur C Clarke predicted this in The City And The Stars. In its insanely-far-future society there is a dictum that “no machine shall have any moving parts.”
I wish people would be a little pickier about which predictions they implement and maybe skip the ones made in stories with a dystopian setting. Couldn’t we have stuck to nice predictions, like geostationary satellites?
It’s hidden, but… tap url bar, then hold down space and move cursor to where you want to edit. Now normal actions work ( e.g. double tap to select a word).
The trackpad mode works very poorly on the iPhone SE because you can’t move down since there’s no buffer under the space key, unlike the newer phone types. It doesn’t work well for URLs because the text goes off screen to the right, and it moves very slowly. Ironically I’m on an iPad and I just tried to insert “well” into the last sentence and the trackpad mode put the cursor into the wrong place just as I released my tap. It just sucks. This is not a viable text editing method.
I don’t understand the part about there being one convenient post to hide. This story was also a single post, and equally convenient to hide. Is there more to it, or should I just take the “tradition” part as the answer?
Are there any other companies which are excepted from the policy against product announcements?
Apple events tend to generate a ton of followup posts which might or might not be considered topical to this site. If an event does that, those submissions can be folded into the main event post, minimizing annoyance for those that are hiding that post.
Corpspam submissions via press release (like the Mullvad example) are more random. Individual members hiding these is not a strong signal that these are unwelcome here. Removing them via mod action is.
I’m gonna go out on a limb here and say “yes”. If you want to graph the two against each other along a time axis and prove me wrong, knock yourself out.
I would phrase it differently, but I agree. Allowing the Apple announcements is, yeah, sort of a “heckler’s promo” for a popular topic. They’re pretty wide-ranging and shape the direction our field develops in. The Mullvad story was a 180-word press release with negligible technical info. If they wanted to release their image or write a few thousand words about things they learned along the way in this multi-year, technically demanding project, it’d be topical and welcome, and I’d be upvoting the post.
Fundamentally, I want links that prompt informative, creative discussions in a healthy community. The Apple announcements are fine for that. Small businesses mentioning minor product enhancements almost never do.
Ironically, the Mullvad submission contained a link to an older post where they announced they were going to move to a RAM based Linux distro, and which had a bunch of links to the software they were using. Absolutely on-topic.
Had the removed submission contained that sort of information I would not have flagged it as spam in good conscience.
I will never switch to Wayland and will never port my applications to use it. I’d sooner switch to Windows and just drop Linux support entirely. (Though odds are it won’t come to that, as X actually works really very well despite, or perhaps thanks to, its relative lack of git activity.)
There’s a narrow line between pragmatism and dogmatism, and this comment (especially without elaborating on why) seems to veer heavily into the latter. What reasons would make you “rather switch to Windows and just drop Linux support entirely”, or do you just enjoy screwing over users who don’t use the technologies you like? Is there anything actionable devs can take from your stances, or are you just venting?
Comments like this do little but spark more vitriol in what is already a contentious debate.
I’m trying not to beat the dead horse - I’ve gone into why in many comments on other threads, and this one is a specific call to action to port things.
Porting things to Wayland is actually an enormous amount of work. Especially for me personally as a user, since it means doing something about my window manager, my beloved taskbar, every little detail of my workflow in addition to my application toolkit…. all for negative benefit; things work worse on Wayland than they already do today on X. And this situation is not likely to change for a long time; indeed, I expect X will continue to actually work adequately for ages.
Open source works best when people act out of their own rational self-interest, then share the result with others because it costs them near nothing to do so - I made this for me, then shared it in the hope that it might be useful, but without warranty of any kind, etc.
Switching to Wayland costs me a lot. What does it get me? And the constant stream of falsehoods out of Wayland proponents irks me to such a point that I don’t want to give them anything. Maybe if they started telling the truth instead of just trying to shove this junk down my throat, I’d give them some slack. But zero respect has been earned.
Your arguments about needing to change your window manager, taskbar, etc. are reasonable (desktop environment users can mostly be transparently migrated over, those of us on tilers had to change a lot), and yes, there’s development costs. But the same can be said about keeping up with updates to, say, core libraries (GTK updates, OpenSSL updates, whatever). Keeping software running as times and tools evolve is always going to be work.
(tangent below)
I will say, “things work worse on Wayland than they already do today on X” is flat-out false for my usecases and something I hear parroted constantly that hasn’t, since 2018 when I switched to Wayland full-time (back when it was beta-ware at best), been true for my usecases at all. I run (sometimes, at least) mixed-DPI monitors (i.e. a normal-DPI external monitor at 100% scale, but a laptop panel at 125-200% scale), which is something Xorg notoriously can’t handle in a reasonable way (there are toolkit-level hacks, if I’m willing to only use Qt apps - which I wish I could, but Firefox and all Electron apps are GTK). Every time I play with Xorg again to see what folks are on about, I have so much screen tearing and flickering and general glitching that it distracts me from watching videos or playing games. Last I checked (this may have changed), Firefox didn’t support VA-API hardware acceleration for YouTube videos on Xorg, only on Wayland, which directly costs me CPU cycles and thus battery life on portable devices. Xorg (and all window managers I’ve ever used on it) allows a window to fully claim control of the screen in a way the window manager can’t override, so if a game selects the wrong fullscreen resolution, I’m stuck - potentially so stuck that I need to run pkill on the process from a VT after cracking out Ctrl-Alt-F2.
So… I mean, look, I’m glad Xorg works for you, and works for enough other folks that some projects like OpenBSD are devoutly sticking to it. That’s the beauty of open-source, we can use what works for us. But if we’re just sharing anecdotes and throwing words around, there’s my set: Xorg is unbelievably broken to me as an end-user (nevermind as a developer, I’m looking at this solely from the lens of my “normie” usecases). I will accept Wayland’s lack of global key shortcuts and patchy-at-best screensharing abilities (which are slowly improving) over literally any experience Xorg offers these days.
And so your last paragraph about “constant streams of falsehoods” and “shove this junk down my throat” confuses me. What falsehoods? The Wayland folks say it works, and wow, does it ever, as long as I don’t need a few specific workflows. And if I need those, then Xorg is right there, at least until it runs out of maintainers. Why the vitriol and hate? I see tons of truth in the space; maybe read better/less inflammatory articles? Check out the work emersion and a few others are doing while maintaining wlroots. They seem like fairly straight shooters to me from anything I’ve read.
For what it’s worth, I pick and choose the technologies that work for me. Wayland yes, systemd no, PipeWire yes, Flatpak absolutely the hell not. I understand where your sentiment in that last sentence comes from, but I’m not sure I ascribe it to all of the modern Linux renovation projects. Some (e.g. PipeWire) seem quite well thought-out, some (Wayland) seem… uh, maybe under-specified to start and rushed out, but mostly recoverable, some (systemd) I think had far, far too much scope in a single project, and some I just outright disagree with the UX considerations of (Flatpak). Again: I’m glad we get to pick and choose in this niche, to some degree (you can argue this only really exists on certain distributions like Gentoo and Void and not be horribly incorrect…)
Xorg (and all window managers I’ve ever used on it) allows a window to fully claim control of the screen in a way the window manager can’t override, so if a game selects the wrong fullscreen resolution, I’m stuck - potentially so stuck that I need to run pkill on the process from a VT after cracking out Ctrl-Alt-F2.
I don’t believe that’s true. For example twm keeps full screen clients in their own window which you can resize and move around.
This would be a pleasant surprise of a thing to learn that I’m wrong about. It might also vary game to game? I seem to recall some games taking exclusive control of the rendering plane, but if a WM is capable of fixing that, then that solves that point to be equal with my Wayland experience (where “fullscreen” is a lie the compositor makes to the client, and I can always Super-F my way back down to a tiled window)
I believe WMs which allow this functionality are not EWMH compliant, but there is nothing forcing a WM to do it one way or another. Try it with twm and see if you experience the pleasant surprise that you hypothesize.
I’m afraid your interlocutor is correct on this point. Under X11, screen-locking apps have no special permissions, so it follows that if your screen saver can take over your screen, so can any other app. Compliance with EWMH is completely optional for your window manager or any other X11 client.
That being said I’ll still be using X11 until I can’t run a modern web browser or Steam with it and or Debian drops it from stable. X11 is a crazy mess, but it’s the crazy mess I know and love.
You’re right, clients can bypass the window manager using the override-redirect flag. I wonder if games use that flag in practice – full screen programs generally don’t, but games might use it for better performance.
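For reference, this is roughly what the bypass looks like from the client side - a sketch using python-xlib, following its documented keyword-attribute style (treat the exact call shape as an assumption rather than gospel):

```python
# Sketch: an override-redirect window is mapped without the window manager
# ever being asked (no MapRequest is generated), which is how a client can
# sidestep the WM entirely.
from Xlib import X, display

d = display.Display()
screen = d.screen()
root = screen.root

win = root.create_window(
    0, 0, 640, 480, 0,                  # x, y, width, height, border width
    screen.root_depth,
    background_pixel=screen.black_pixel,
    override_redirect=1,                # the flag in question
    event_mask=X.ExposureMask,
)
win.map()
d.sync()
# The window is now on screen; the WM had no chance to reparent or resize it.
```

Whether a given game actually sets this flag (versus just asking for a mode switch) is the part worth checking case by case.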
I’ve interacted with the LLVM project only once (an attempt to add a new clang diagnostic), and my experience with Phabricator was a bit painful (in particular, the arcanist tool). Switching to GitHub will certainly reduce friction for (new) contributors.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
There are so many alternatives they could have chosen if they wanted the pull/merge request model. It really is a shame they ended up where they did. I’d love to delete my Microsoft GitHub account just like I deleted my Microsoft LinkedIn account, but the lock-in all of these projects take on means that to participate in open source, I need to keep a proprietary account that trains on all of our data, upsells things we don’t need, & makes a code forge a social media platform with reactions + green graphs to induce anxiety + READMEs you can’t read anymore since it’s all about marketing (inside their GUI) + Sponsors which should be good but they’re skimming their cut of course + etc.
It really is a shame they ended up where they did.
If even 1% of the energy that’s spent on shaming and scolding open-source maintainers for picking the “wrong” infrastructure was instead diverted into making the “right” infrastructure better, this would not be a problem.
Have you used them? They’re all pretty feature complete. The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
It’s the same with chat apps—they can all send messages, voice/video/images, replies/threads. There’s no reason to be stuck with WhatsApp, Messenger, Telegram, but people do since their network is there. So you need to get the network to move.
The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
And open-source collaboration is, in fact, a social activity. This suggests an area where alternatives need to be focusing some time and effort, rather than (again) scolding and shaming already-overworked maintainers who are simply going where the collaborators are.
Breaking out the word “social” from “social media” isn’t even talking about the same thing. It’s a social network à la Facebook/Twitter, with folks focusing on how many stars they have, how green their activity bars are, how flashy their RENDERME.md file is, scrolling feeds, avatars, Explore—all to keep you on the platform. And as a result you can hear anxiety in many developers over how their Microsoft GitHub profile looks—as much as you hear folks obsessing about their TikTok or Instagram comments. That social anxiety should have little place in software.
Microsoft GitHub’s collaboration system isn’t special & doesn’t even offer a basic feature like threading; replying to an inline-code comment via email puts a new reply on the whole merge request, and there are other bugs. For collaboration, almost all of the alternatives have a ticketing system, with some having Kanban & additional features—but even then, a dedicated (hopefully integrated) ticketing system, forum, mailing list, or libre chat option can offer a better, tailored experience.
Suggesting open source dogfood on open source leads to better open source & more contributions rather than allowing profit-driven entities to try to gobble up the space. In the case of these closed platforms you as a maintainer are blocking off an entire part of your community that values privacy/freedom or those blocked by sanctions while helping centralization. The alternatives are in the good-to-good-enough category, so there’s nothing to lose, and it opens up collaboration to a larger audience.
But I’ll leave you with a quote
Choosing proprietary tools and services for your free software project ultimately sends a message to downstream developers and users of your project that freedom of all users—developers included—is not a priority.
In the case of these closed platforms you as a maintainer are blocking off an entire part of your community that values privacy/freedom or those blocked by sanctions while helping centralization.
The population of potential collaborators who self-select out of GitHub for “privacy/freedom”, or “those blocked by sanctions”, is far smaller than the population who actually are on GitHub. So if your goal is to make an appeal based on size of community, be aware that GitHub wins in approximately the same way that the sun outshines a candle.
And even in decentralized protocols, centralization onto one, or at most a few, hosts is a pretty much inevitable result of social forces. We see the same thing right now with federated/decentralized social media – a few big instances are picking up basically all the users.
But I’ll leave you with a quote
There is no number of quotes that will change the status quo. You could supply one hundred million billion trillion quadrillion octillion duodecillion vigintillion Stallman-esque lectures per femtosecond about the obvious moral superiority of your preference, and win over zero users in doing so. In fact, the more you moralize and scold the less likely you are to win over anyone.
If you genuinely want your preferred type of code host to win, you will have to, sooner or later, grapple with the fact that your strategy is not just wrong, but fundamentally does not grasp why your preferences lost.
Some folks do have a sense of morality to the decisions they make. There are always trade-offs, but I fundamentally do not agree that the trade-offs for Microsoft GitHub outweigh the issue of using it. Following the crowd is less something I’m interested in than being the change I & others would like to see. Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them & are then reassured that the project & collaboration will continue. I see a lot of positive collaboration on SourceHut ‘despite’ not having the social features and doing collaboration via email + IRC & it’s really cool. It’s possible to overthrow the status quo—and if the status quo is controlled by a US megacorp, yeah, let’s see that change.
Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them & are then reassured that the project & collaboration will continue.
But this is a misleading statement at best. Suppose that on Platform A there are one million active collaborators, and on Platform B there are ten. Sure, technically “collaboration will continue” if a project moves to Platform B, but it will be massively reduced by doing so.
And many projects simply cannot afford that. So, again, your approach is going to fail to convert people to your preferred platforms.
I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference like choosing a flavor of jam at the market. And if folks aren’t voicing an opinion, then the status quo would remain.
the social network part is more harmful than good.
I think you underestimate the extent to which social features get and keep people engaged, and that the general refusal of alternatives to embrace the social nature of software development is a major reason why they fail to “convert” people from existing popular options like GitHub.
To clarify, are you saying that social gamification features like stars and colored activity bars are part of the “social nature of software development” which must be embraced?
Assuming they wanted to move specifically to Git & not a different DVCS, LLVM probably would have the resources to run a self-hosted Forgejo instance (what ‘powers’ Codeberg). Forgejo supports that pull/merge request model—and they are working on the ForgeFed protocol, which as a bonus would allow federation support, meaning folks wouldn’t even have to create an account to open issues & participate in merge requests, which is a common criticism of these platforms (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at the present even if self-hosted since an account is still required). If pull/merge request + Git isn’t a requirement, there are more options.
(i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at the present even if self-hosted since an account is still required)
How do they manage to require you to make an account for self-hosted GitLab? Is there a fork that removes that requirement?
Self-hosting GitLab does not require any connection to GitLab computers. There is no need to create an account at GitLab to use a self-hosted GitLab instance. I’ve no idea where this assertion comes from.
One does need an account to contribute on a GitLab instance. There is integration with authentication services.
Alternatively, one could wait for the federated protocol.
In my personal, GitHub-avoiding, experience, I’ve found that using mail to contribute usually works.
One does need an account to contribute on a GitLab instance.
That’s what I meant… an account is required for the instance. With ForgeFed & mailing lists, no account on the instance is required. But there was news 1–2 weeks ago about trying to get some form of federation into GitLab. It was likely a complaint about needing to create accounts on all of the self-hosted options.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
I think the core thing is that projects aren’t in the “maintain a forge” business, but the “develop a software project” business. Self-hosting is not something they want to be doing, as you can see by the maintenance tasks mentioned in the article.
Of course, then the question is, why GitHub instead of some other managed service? It might be network effect, but honestly, it’s probably because it actually works mostly pretty well - that’s how it grew without a network effect in the first place. (Especially on a UX level. I did not like having to deal with Phabricator and Gerrit last time I worked with a project using those.)
I would not be surprised if GitHub actively courted them as hostees. It’s a big feather in GH’s cap and reinforces the idea that GH == open source development.
I think the move started on our side, but GitHub was incredibly supportive. They added a couple of new features that were deal breakers and they waived the repo size limits.
There is Codeberg & others running Forgejo/Gitea, as well as SourceHut & GitLab, which are all Git options without needing Microsoft GitHub or self-hosting. There are others for non-Git DVCSs. The Microsoft GitHub UI is slow, breaks all my browser shortcuts, and has upsell ads all throughout. We aren’t limited to a choice of “if not Microsoft GitHub, then self-host”.
for signing events and requests to work, matrix expects the json to be in canonical form, except the spec doesn’t actually define what the canonical json form is strictly
I’m astonished by how often this mistake is repeated. I’ve been yelling into the void about it for what feels like an eternity, but I’ll yell once more, here and now: JSON doesn’t define, specify, guarantee, or even in practice reliably offer any kind of stable, deterministic, or (ha!) bijective encoding. Which means any signature you make on a JSON payload is never gonna be sound. You can’t sign JSON.
If you want to enforce some kind of canonicalization of JSON bytes, that’s fine!! and you can (maybe) sign those bytes. But that means that those bytes are no longer JSON! They’re a separate protocol, or type, or whatever, which is subject to the rules of your canonical spec. You can’t send them over HTTP with Content-Type: application/json, you can’t parse them with a JSON parser, etc. etc. with the assumption that the payload will be stable over time and space.
It’s a perfectly lovely canonical form, but it’s not mandatory. JSON parsers will still happily accept any other non-canonical form, as long as it remains spec-compliant. Which means the JSON payloads {"a":1} and { "a": 1 } represent exactly the same value, and that parsers must treat them as equivalent.
If you want well-defined and deterministic encoding, which produces payloads that can be e.g. signed, then you need guarantees at the spec level, like what e.g. CBOR provides. There are others. (Protobuf is explicitly not one!!)
Sure, if you want to parse received JSON payloads and then re-encode them in your canonical form, you can trust that output to be stable. Just as long as you don’t sign the payload you received directly!
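To make that concrete, here’s a minimal Go sketch (standard library only, purely illustrative): the two payloads below are the same JSON value but different bytes, so any hash or signature over the bytes distinguishes documents the JSON spec treats as equivalent.

package main

import (
    "crypto/sha256"
    "encoding/json"
    "fmt"
    "reflect"
)

func main() {
    a := []byte(`{"a":1}`)
    b := []byte(`{ "a": 1 }`) // same JSON value, different bytes

    var va, vb map[string]any
    json.Unmarshal(a, &va)
    json.Unmarshal(b, &vb)

    fmt.Println(reflect.DeepEqual(va, vb))            // true: equivalent JSON objects
    fmt.Println(sha256.Sum256(a) == sha256.Sum256(b)) // false: different digests, so different signatures
}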
Can you say more about Protobuf not guaranteeing a deterministic encoding at the spec level? Is it that the encoding is deterministic for a given library, but this is left to the implementation rather than the spec? Does the spec say something about purposely leaving this open?
When a message is serialized, there is no guaranteed order for how its known or unknown fields will be written. Serialization order is an implementation detail, and the details of any particular implementation may change in the future.
Do not assume the byte output of a serialized message is stable.
By default, repeated invocations of serialization methods on the same protocol buffer message instance may not produce the same byte output. That is, the default serialization is not deterministic.
The implementation is explicitly allowed to have a deterministic order.
At the spec level it’s undefined.
Yes, which is my point — unless you’re operating in a hermetically sealed environment, senders can’t assume anything about the implementation of receivers, and vice versa. You can maybe rely on the same implementation in an e.g. unit test, but not in a running process. The only guarantees that can be assumed in general are those established by the spec.
Across Builds
Exact same thing here — modulo hermetically sealed environments, senders can’t assume anything about the build used by receivers, and vice versa.
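For what it’s worth, here’s roughly how that per-implementation knob looks in Go with google.golang.org/protobuf. This is only a sketch, assuming some generated message type; the point is that it pins down this binary’s own output, not anything a peer will produce.

package main

import (
    "fmt"

    "google.golang.org/protobuf/proto"
)

// marshalDeterministic fixes this library's own choices (e.g. map key order),
// but per the docs quoted above the wire format still doesn't promise the
// same bytes from other implementations or future versions.
func marshalDeterministic(m proto.Message) ([]byte, error) {
    return proto.MarshalOptions{Deterministic: true}.Marshal(m)
}

func main() {
    // A generated message type (hypothetical pb.Example) would be passed in
    // here; Deterministic is a per-binary knob, not a cross-implementation
    // guarantee you could hang a signature on.
    fmt.Println("Deterministic marshaling is per-binary only")
}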
That spec should specify Unicode code point order (Unicode, ASCII, UTF-8, and UTF-32 all share the same sort order), not UTF-16, as UTF-16 sorts some code points out of order. That was one of the reasons why UTF-8 was created.
Also, we don’t sort JSON object keys for cryptography. Order is inherited from the UTF-8 serialization for verification. Afterward, the object may be unmarshalled however seen fit. This allows arbitrary order.
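A small Go illustration of the sort-order point (standard library only): UTF-8 bytes compare in code point order, while UTF-16 code units do not once you cross into the supplementary planes.

package main

import (
    "fmt"
    "unicode/utf16"
)

func main() {
    lo := '\uFFFD'     // U+FFFD, in the BMP
    hi := '\U00010000' // U+10000, a surrogate pair in UTF-16

    // UTF-8 bytes sort in the same order as code points:
    fmt.Println(string(lo) < string(hi)) // true (EF BF BD < F0 90 80 80)

    // UTF-16 code units do not: the surrogate pair D800 DC00 sorts
    // before FFFD, reversing the code point order.
    fmt.Println(utf16.Encode([]rune{lo})[0] < utf16.Encode([]rune{hi})[0]) // false: 0xFFFD > 0xD800
}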
One does not sign JSON, one signs a bytearray. That multiple JSON serializations can have the same content does not matter. One could even argue that it’s a feature: the hash of the bytearray is less predictable which makes it more secure.
I do not get the hangup on canonicalization. Just keep the original bytearray with the signature: done.
Lower in this thread a base64 encoding is proposed. Nonsense, just use the bytearray of the message. What the internal format is, is irrelevant. It might be JSON-LD, RDF/XML, Turtle, it does not matter for the validity of the signature. The signature applies to the bytearray: this specific serialization.
Trying to deal with canonicalization is a non-productive intellectual hobby that makes specifications far too long, complex and error prone. It hinders adoption of digital signatures.
A JSON payload (byte array) is explicitly not guaranteed to be consistent between sender and receiver.
What the internal format is, is irrelevant. It might be JSON-LD, RDF/XML, Turtle, it does not matter for the validity of the signature. The signature applies to the bytearray: this specific serialization.
This is very difficult to enforce in practice, for JSON payloads particularly.
Of course a bytearray is consistent. There’s a bytearray. It has a hash. The bytearray can be digitally signed. Perhaps the bytearray can be parsed as a JSON document. That makes it a digitally signed JSON document. It’s very simple.
Data sent from sender to receiver is sent as a bytearray. The signature will remain valid for the bytearray.
Just don’t try to parse and serialize it and hope to get back the same bytearray. That’s a pointless exercise. Why would you do that? If you know it will not work, don’t do it. Keep the bytearray.
What is hard to enforce? When I send someone a bytearray with a digital signature, they can check the signature. If they want to play some convoluted exercise of parsing, normalizing, serializing and hoping for the same bytearray, you can do so, but don’t write such silliness in specifications. It just makes them fragile.
Sending bytearrays is not hard to do, it’s all that computers do. Even in browsers, there is access to the bytearray.
Of course a bytearray is consistent. There’s a bytearray. It has a hash. The bytearray can be digitally signed. Perhaps the bytearray can be parsed as a JSON document. That makes it a digitally signed JSON document. It’s very simple.
If you send that byte array in an HTTP body with e.g. Content-Type: application/octet-stream, yes — that marks the bytes as opaque, and prevents middleboxes from parsing and manipulating them. But with Content-Type: application/json, it’s a different story — that marks the bytes as representing a JSON object, which means they’re free to be parsed and re-encoded by any middlebox that satisfies the rules laid out by JSON. This is not uncommon; CDNs will sometimes compact JSON as an optimization. And it’s this case I’m mostly speaking about.
I’m not trying to be difficult, or speculating about theoreticals, or looking for any kind of argument. I’m speaking from experience, this is real stuff that actually happens and breaks critical assumptions made by a lot of software.
If you sign a JSON encoding of something, and include the bytes you signed directly alongside the signature as opaque bytes — i.e. explicitly not as a sibling or child object in the JSON message that includes the signature — then no problem at all.
tl;dr: sending signatures with JSON gotta be like {"sig":"XXX", "msg":"XXX"}
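A minimal sketch of that envelope shape in Go (the key handling and field names here are illustrative, not any particular protocol’s):

package main

import (
    "crypto/ed25519"
    "crypto/rand"
    "encoding/base64"
    "encoding/json"
    "fmt"
)

// envelope carries the signed bytes opaquely: the outer JSON can be
// re-encoded by anything along the way without touching msg or sig.
type envelope struct {
    Msg string `json:"msg"` // base64 of the exact bytes that were signed
    Sig string `json:"sig"` // base64 of the signature over those bytes
}

func main() {
    pub, priv, _ := ed25519.GenerateKey(rand.Reader)

    msg := []byte(`{"a":1,"b":2}`) // whatever bytes the sender happened to encode
    sig := ed25519.Sign(priv, msg)

    out, _ := json.Marshal(envelope{
        Msg: base64.StdEncoding.EncodeToString(msg),
        Sig: base64.StdEncoding.EncodeToString(sig),
    })

    // Receiver: decode the envelope, verify the exact bytes, then parse them.
    var in envelope
    json.Unmarshal(out, &in)
    gotMsg, _ := base64.StdEncoding.DecodeString(in.Msg)
    gotSig, _ := base64.StdEncoding.DecodeString(in.Sig)
    fmt.Println(ed25519.Verify(pub, gotMsg, gotSig)) // true
}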
Such CDNs would break Subresource Integrity and etag caching. Compression is a much more powerful optimization than removing a bit of whitespace, so it’s broken and inefficient. Changing the content in any way based on a mimetype is dangerous. If a publisher uses a CDN with such features, they should know to disable them when the integrity of the content matters.
I’m sending all my mails with a digital signature (RFC 4880 and 3156). That signature is not applied to a canonicalized form of the mail apart from having standardized line endings. It’s applied to the bytes. Mail servers should not touch the content other than adding headers.
Changing the content in any way based on a mimetype is dangerous.
Dangerous or not, if something says it’s JSON, it’s subject to the rules defined by JSON. A proxy that transforms the payload according to those rules might have to intermediate on lower-level concerns, like Etag (as you mention). But doing so would be perfectly valid.
And it’s not limited to CDNs. If I write a program that sends or receives JSON over HTTP, any third-party middleware I wire into my stack can do the same kind of thing, often without my knowledge.
I’m sending all my mails with a digital signature (RFC 4880 and 3156). That signature is not applied to a canonicalized form of the mail apart from having standardized line endings. It’s applied to the bytes. Mail servers should not touch the content other than adding headers.
Yes, sure. But AFAIK there is no concept of a “mail object” that’s analogous to a JSON object, is there?
Dangerous or not, if something says it’s JSON, it’s subject to the rules defined by JSON.
A digital signature does not apply to JSON. It applies to a bytearray. If an intermediary is in a position to modify the data it transmits and does not pass along a bytearray unchanged, it’s broken for the purpose of passing on data reliably and should not be used.
Canonicalization cannot work sustainably because as soon as it does some new ruleset is thought up by people that enjoy designing puzzles more than creating useful software. Canonicalization has a use when you want to compare documents, but is a liability in the context of digital signatures.
A digital signature is meant to prove that a bytearray was endorsed by an entity with a private key.
If any intermediary mangles the bytearray, the signature becomes useless and the intermediary should be avoided. An algorithm that tries to undo the damage done by broken intermediaries is not the solution. Either the signature matches the bytearray or it does not.
A digital signature does not apply to JSON. It applies to a bytearray.
100% agreement.
If an intermediary is in a position to modify the data it transmits and does not pass along a bytearray unchanged, it’s broken for the purpose of passing on data reliably and should not be used.
Again 100% agreement, which supports my point that you can’t sign JSON payloads, because JSON explicitly does not guarantee that any encoded form will be preserved reliably over any transport!
JSON explicitly does not guarantee that any encoded form will be preserved reliably over any transport!
Citation needed. I can read nothing about this in RFC 8259. Perhaps your observation is a fatalist attitude that springs from working with broken software. Once you allow this for JSON, what’s next? Re-encoding JPEGs, adding tracking watermarks to documents? No transport should modify the payload that it is transporting. If it does, it’s broken.
There is no guarantee about the behavior of transports in JSON RFC 8259. There is also no text that allows the serialization to change for certain transports.
Once you allow this for JSON, what’s next? Re-encoding JPEGs, adding tracking watermarks to documents?
Yes, sure. If the payloads are tagged as specific things with defined specs, intermediaries are free to modify them in any way that doesn’t violate the spec. This isn’t my speculation, or fatalism, it’s direct real-world experience.
No transport should modify the payload that it is transporting. If it does, it’s broken.
If you want to ensure that your payload bytes aren’t modified, then you need to make sure they’re opaque. If you want to send such bytes in a JSON payload, you need to mark the payload as something other than JSON, or encode those bytes in a JSON string.
You might be missing the core info about why many signed JSON APIs are trash: they include the signature in the same JSON document as the thing they sign:
The signature is calculated for a JSON serialization of a dict with, in this example, the keys username and message, then the signature key is added to the dict. This modified dict is serialised again and sent over the network.
This means that the client doesn’t have the original byte array. It needs to parse the JSON it was given, remove the signature key, and then serialize again in some way that generates exactly the same bytes, and only then can it recompute the signature over those bytes and validate the message.
The PayPal APIs do the thing you’re thinking of: they generate some bytes (which you can parse to JSON) and provide the signature as a separate value (as an HTTP header, I think).
@peterbourgon’s suggestion also avoids the core issue and additionally protects against middle boxes messing with the bytes (which I agree they shouldn’t do, but they do so 🤷) and makes the easiest way of validating the signature also the correct way.
(If the application developer’s web framework automatically parses JSON then you just know that some of them are going to remove the signature key, reserialise and hash that (I’ve seen several people on GitHub try to do this with the JSON PayPal produces))
The PayPal way is fine, but you then get into the question of how to transmit two values instead of one. You can use HTTP headers or multipart encoding, but now your protocol is tied to HTTP and users need to understand those things as well as JSON. Peter’s suggestion requires users only to understand JSON and some encoding like base64.
A final practical point: webservers sometimes want to consume the request body and throw it away if they can parse it into another format (elixir phoenix does this, for efficiency, they say), so your users may need to provide a custom middleware for your protocol and get it to run before the default JSON middleware, which is likely to be more difficult for them than turning a base64 string back into JSON.
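For comparison, a hedged Go sketch of the detached-signature style (the X-Signature header name and the key handling are made up for illustration). Note it still relies on the body bytes surviving transit unchanged, which is the caveat raised above.

package main

import (
    "crypto/ed25519"
    "encoding/base64"
    "encoding/json"
    "io"
    "net/http"
)

// verifyThenParse reads the raw body, checks the detached signature from a
// hypothetical X-Signature header against those exact bytes, and only then
// hands them to the JSON parser. No canonicalization is needed because the
// bytes that were signed are the bytes that are verified.
func verifyThenParse(pub ed25519.PublicKey, w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "read error", http.StatusBadRequest)
        return
    }
    sig, err := base64.StdEncoding.DecodeString(r.Header.Get("X-Signature"))
    if err != nil || !ed25519.Verify(pub, body, sig) {
        http.Error(w, "bad signature", http.StatusUnauthorized)
        return
    }
    var payload map[string]any
    if err := json.Unmarshal(body, &payload); err != nil {
        http.Error(w, "bad json", http.StatusBadRequest)
        return
    }
    w.WriteHeader(http.StatusNoContent)
}

func main() {
    pub, _, _ := ed25519.GenerateKey(nil) // demo key; a real service would load a known public key
    http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
        verifyThenParse(pub, w, r)
    })
    _ = http.ListenAndServe(":8080", nil)
}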
likewise, it really frustrates me. I’m not surprised, just annoyed, because it’s an aspect of things that always gets fixed as an afterthought in cryptography-related standards…
nobody likes ASN.1, especially not the experts in it, but it exists for a reason. text-based serialization formats don’t canonicalize easily and specifying a canonicalization is extra work. even some binary formats, such as protocol buffers, don’t necessarily use a canonical form (varints are the culprit there).
ASN.1 does not help with canonicalization either. It has loads of different wire encodings, e.g. BER, PER. For cryptographic purposes you must use DER, which is BER with extra rules to say which of the many alternative forms in BER must be used, e.g. forbidding encodings of integers with leading zeroes.
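For what it’s worth, Go’s encoding/asn1 emits DER directly, which is a small way to see the “one valid encoding per value” property in practice (illustrative only):

package main

import (
    "encoding/asn1"
    "fmt"
)

func main() {
    // DER is the canonical subset of BER: exactly one valid encoding per
    // value, e.g. minimal-length INTEGERs with no leading zero octets.
    der, _ := asn1.Marshal(65537)
    fmt.Printf("% x\n", der) // 02 03 01 00 01 (INTEGER, length 3, 0x010001)
}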
I think I remember seeing your name on a GitHub issue conversation, with the same couple of “adversaries” justifying their actions lol.
I distanced myself from that ecosystem both professionally and hobby-wise because I did not like how the tech stack was implemented, and how the governance behaved.
Although most of the bad decisions have been inherited from a rather… peculiar previous leadership.
No. The point is that you get a different base64 string. It makes it obvious that the message was tampered with.
The problem is that when canonicalizing json, there are multiple json byte sequences that can be validated with a given signature.
A bug in canonicalizing may lead to accepting a message that should not have been accepted. For example, you may have duplicate fields. One JSON parser may take the first duplicate, one may take the last, and if you canonicalize after parsing and pass the message along, you can now inject malicious values (see the sketch below).
You may say “but if you follow the RFC, don’t use the stock json libraries that try to make things convenient, and are really careful, you’re protected”. You’d be right, but it’s a tall order.
With base64, there’s only one message that will validate with a given signature (birthday attacks aside). It’s much harder to get wrong.
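A tiny Go illustration of the duplicate-key hazard described above (illustrative; other parsers genuinely differ here):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Go's encoding/json keeps the last occurrence of a duplicate key; other
    // parsers keep the first, or reject the document outright. If a
    // canonicalize-and-forward step disagrees with the final consumer about
    // which value "won", that's an injection vector.
    raw := []byte(`{"user":"alice","user":"admin"}`)
    var v map[string]string
    json.Unmarshal(raw, &v)
    fmt.Println(v["user"]) // "admin" here; a first-wins parser would say "alice"
}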
Well, not exactly. {"a":1} and { "a": 1 } are different byte sequences, and equivalent JSON payloads. But the base64 encodings of those payloads are different byte sequences, and different base64 payloads – base64 is bijective. (Or, at least, some versions of base64.)
Another way to phrase this is that it makes it hard to shoot yourself in the foot. If you get straight JSON over the wire, what do you do? You need to parse it in order to canonicalize it, but your JSON parser probably doesn’t parse it the way you need it to in order to canonicalize it for verification, so now you have to do a bunch of weird stuff to try and parse it yourself, and maybe serialize a canonicalized version again just for verification, etc.
The advantage of using base64 or something like it (e.g. straight hex encoding as mentioned in your sibling comment) is that it makes it obvious that you should stop pretending that you can reasonably sign a format that can’t be treated as “just a stream of bytes” (because you can’t - a signature over a stream of bytes is the only cryptographic primitive we have, so what you’re actually doing by “canonicalizing JSON” is turning the JSON into a stream of bytes, poorly) and just sign something that is directly and solely a stream of bytes.
Edit: the problem with this is that you’ve now doubled your storage cost. The advantage of signing JSON is that you can deserialize, store that in a database alongside the signature, and reconstruct functionally the same thing if you need to retransmit the original message (for example to sync a room up to a newly-joined Matrix server). If you’re signing base64/hex-encoded blobs, you now need to store the original message that was signed, rather than being able to reconstruct it on-the-fly. But a stream of bits isn’t conducive to e.g. database searches, so you still have to store the deserialized version too. Hence: 2x storage.
Another way to phrase this is that it makes it hard to shoot yourself in the foot. If you get straight JSON over the wire, what do you do? You need to parse it in order to canonicalize it,
Even doing that much I would consider to be a success!
One, it’s rare that a canonical form is even defined, and rarer still that it’s defined in a way that’s actually unambiguous. I’m dubious that Matrix’s canonical JSON spec (linked elsewhere) qualifies.
Two, even if you have those rules, it’s rare that I’ve ever seen code that follows them. Usually a project will assume the straight JSON from the wire is canonical, and sign/verify those wire bytes directly. Or, it might parse the wire bytes into a value, but then it will sign/verify the bytes produced by the language default JSON encoder, assuming those bytes will be canonical.
I don’t understand why a distinction between reordering keys and changing whitespace needs to be made. Are they treated differently in the JSON RFC?
equivalent JSON payloads
Equivalent according to whom? The JSON RFC doesn’t define equality.
Are you simply saying that defining a canonical key ordering wouldn’t be sufficient since you’d need to define canonical whitespace too? If so, I don’t understand why it contradicts bdesham’s comment, since they just gave a single example of what base64 doesn’t canonicalize.
I don’t understand why a distinction between reordering keys and changing whitespace needs to be made. Are they treated differently in the JSON RFC?
I didn’t mean to distinguish key order and whitespace. Both are equally and explicitly defined to be arbitrary by the JSON spec.
Equivalent according to whom? The JSON RFC doesn’t define equality.
Let me rephrase: {"a":1,"b":2} and {"b":2,"a":1} and { "a": 1, "b": 2 } are all different byte sequences, but represent exactly the same JSON object. The RFC specifies JSON object equality to at least this degree — we’ll ignore stuff like IEEE float precision 😉. If you defined a canonical encoding, your parser would reject non-canonical input, which isn’t permitted by the JSON spec, and means you’re no longer speaking JSON.
The RFC specifies JSON object equality to at least this degree
I don’t think so. At least RFC 8259 doesn’t identify any (!) of those terms. (It can’t for at least two reasons: it doesn’t know how to compare strings, and it explicitly says ordering of kv pairs may be exposed as semantically meaningful to consumers.)
RFC 8259 … explicitly says ordering of kv pairs may be exposed as semantically meaningful to consumers
Where? I searched for “order” and didn’t find anything that would imply this conclusion, AFACT.
Here’s what I did find:
An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array.
and
JSON parsing libraries … differ as to whether or not they make the ordering of object members visible to calling software. Implementations whose behavior does not depend on member ordering will be interoperable in the sense that they will not be affected by these differences
which to me seems to pretty clearly say that order can’t matter to implementations. Maybe I’m misreading.
JSON is semantically hopeless.
JSON is an encoding format that’s human-readable, basically ubiquitous, and more or less able to express what most people need to express. These benefits hugely outweigh the semantic hopelessness you point out, I think.
Those are the quotes I mean, particularly the latter one:
JSON parsing libraries have been observed to differ as to whether or not they make the ordering of object members visible to calling software. Implementations whose behavior does not depend on member ordering will be interoperable in the sense that they will not be affected by these differences.
Left unsaid is that implementations that do depend on or expose member ordering may not be interoperable in that sense. And we know they are still implementations of JSON because of the first sentence there. (“Left unsaid” in that one can infer that anything goes from the first sentence taken with the contrapositive of the second.) Slightly weaselly language like this exists throughout the RFC, including in areas related to string and number comparison. If I understand correctly, while many of those involved wanted to pin down JSON’s semantics somewhat, they could not reach agreement.
JSON is an encoding format that’s human-readable, basically ubiquitous, and more or less able to express what most people need to express. These benefits hugely outweigh the semantic hopelessness you point out, I think.
You might be right. That “more or less” gives me the heebie-jeebies though, because without semantics, the well-known security and interoperability problems will just keep happening. People never really just use JSON, there’s always some often-unspoken understanding about a semantics for JSON involved. Otherwise they couldn’t communicate at all. (The JSON texts would have to remain uninterpreted blobs.) And where parties differ in the fine detail of that understanding, they will reliably miscommunicate.
Implementations whose behavior does not depend on member ordering will be interoperable in the sense that they will not be affected by these differences.
I read this as supporting my interpretation, rather than refuting it. I read it as saying that implementations must be interoperable (i.e. produce equivalent outcomes) regardless of ordering.
Slightly weaselly language like this exists throughout the RFC, including in areas related to string and number comparison.
Totally agreed! And in these cases, implementations have no choice but to treat the full range of possibilities as possibilities, they can’t make narrower assumptions while still remaining compliant with the spec as written.
Implementations whose behavior does not depend on member ordering […] will not be affected by these differences.
It’s a tautology. If you don’t depend on the ordering, you won’t be affected by the ordering. It doesn’t anywhere say that an implementation must not depend on the ordering.
The wording is very similar to the wording in sections regarding string comparison, which if I understand you correctly, you believe is an underdefined area. From section 8.3:
Implementations that [pick a certain strategy] are interoperable in the sense that implementations will agree in all cases on equality or inequality of two strings
It’s a tautology. If you don’t depend on the ordering, you won’t be affected by the ordering. It doesn’t anywhere say that an implementation must not depend on the ordering.
It says that
An object whose names are all unique is interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings.
Meaning, as long as objects keys are unique, two JSON payloads with the same set of name-value mappings must be “interoperable” (i.e. semantically equivalent JSON objects) regardless of key order or whitespace or etc.
No, it says they’ll agree on the name-value mappings. It doesn’t say anything there about whether they can observe or will agree on the ordering - that’s the purpose of the following paragraph, talking about ordering.
Agreeing on name-value mappings is necessarily order-invariant. If this weren’t the case, then the object represented by {"a":1,"b":2} wouldn’t be interoperable with (i.e. equivalent to) the object represented by {"b":2,"a":1} — which is explicitly not the case.
I put it to you that the RFC does not equate those objects, but says that JSON implementations that choose certain additíonal constraints - order-independence, a method of comparing strings, a method of comparing numbers - not required by the specification will equate those objects.
The RFC is very carefully written to avoid giving an equivalence relation over objects.
I understand “interoperable” to mean “[semantically] equivalent”.
If this weren’t the case, then JSON would be practically useless, AFAICT.
It’s not so complicated. The JSON payloads {"a":1,"b":2} and {"b":2,"a":1} must be parsed by every valid implementation into JSON objects which are equivalent. I hope (!) this isn’t controversial.
I’m no Javascript expert, so there may be details or corner cases at play in this specific bit of code. But, to generalize to pseudocode
const x = `{"a":1,"b":2}`
const y = `{"b":2,"a":1}`
if parse(x) == parse(y) {
    log("valid")
} else {
    log("invalid")
}
then yes I’d say this is exactly what I mean.
edit: Yeah, of course JS defines == and === and etc. equality in very narrow terms, so those specific operators would say “false” and therefore wouldn’t apply. I’m referring to semantic equality, which I guess is particularly tricky in JS.
I understand “interoperable” to mean “[semantically] equivalent”. If this weren’t the case, then JSON would be practically useless, AFAICT
Exactly! Me too. I’m saying that every example of interoperability the spec talks about is couched in terms of “if your implementation chooses to do this, …”, i.e. adherence to the letter of the spec alone isn’t enough to get that interoperability. And the practical uselessness - yes, that’s what I believe. It’s fine when parties explicitly contract into a semantics overlaying the syntax of the RFC but all bets are off in cases of middleboxes, databases, query languages etc as far as the standard is concerned.
The JSON payloads {"a":1,"b":2} and {"b":2,"a":1} must be parsed by every valid implementation into JSON objects which are equivalent. I hope (!) this isn’t controversial.
This is of course a very sensible position, but it goes beyond the requirements of the RFC.
A nitpick - if we wrote an encoding of a map as [["a",1],["b",2]] and another with the elements swapped I hope we should agree that the two lists contain the same set of name value mappings. Agreeing on the mappings when keys are disjoint (as required by the spec) is a different relation than equivalence of terms (carefully not defined by the spec), is what I’m trying to say.
if we wrote an encoding of a map as [["a",1],["b",2]] and another with the elements swapped I hope we should agree that the two lists contain the same set of name value mappings.
No, why would they? A name/value mapping clearly describes key: value pairs in an object, e.g. {"name":"value"}, nothing else.
Maps (objects) are unordered by definition; arrays (lists, etc.) are ordered by definition. [["a",1],["b",2]] and [["b",2],["a",1]] are distinct; {"a":1,"b":2} and {"b":2,"a":1} are equivalent.
They should be equivalent, on that we agree; but the standard on its own does not establish their equivalence. It explicitly allows for them to be distinguished.
The RFC says that implementations must parse {"a":1,"b":2} and {"b":2,"a":1} to values which are interoperable. Of course implementations can keep the raw bytes and use them to differentiate the one from the other on that basis, but that’s unrelated to interoperability as expressed by the RFC. You know this isn’t really an interesting point to get into the weeds on, so I’ll bow out.
edit: that’s from
An object whose names are all unique is interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings.
Yeah, something like this is necessary, but unfortunately there are multiple base64 encoding schemes 🥲 I like straight up hex encoding for this reason. No ambiguity, and not really that much bigger than base64, especially given that this stuff is almost always going through a gzipped HTTP pipe, anyway.
I’ve done a lot of work in the area of base conversion (for example).
For projects implementing a base 64, we suggest b64ut which is shorthand for RFC 4648 base 64 URI canonical with padding truncated.
Base 64 is ~33% smaller than Hex. That savings was the chief motivating factor for Coze to migrate away from the less efficient Hex to base64. To address the issues with base 64, the stricter b64ut was defined.
b64ut (RFC 4648 base 64 URI canonical with padding truncated) is:
1. RFC 4648 uses bucket conversion and not iterative divide by radix conversion.
2. The RFC specifies two alphabets, URI unsafe and URI safe, respectively: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ and ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_. b64ut uses the safe alphabet.
2.1. On a tangent, the RFC’s alphabets are “out of order”. A more natural order, from a number perspective but also an ASCII perspective, is to start with 0, so e.g. 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz would have been a more natural alphabet. Regardless, b64ut employs one of the two RFC alphabets. I use the more natural alphabet for all my bases when not using RFC base 64.
3. b64ut does not use padding characters, but since the encoding method adds padding, they are subsequently “truncated”.
4. b64ut uses canonical encoding. There is only a single valid canonical encoding and decoding, and they align. For example, non-canonical systems may interpret hOk and hOl as the same value. Canonical decoding errors on the non-canonical encoding.
There are multiple RFC 4648 encoding schemes, and RFC 4648 only uses a single conversion method that we’ve termed a “bucket conversion” method. There is also the natural base conversion, which is produced by the “iterative divide by radix” method. Thankfully, natural and bucket conversion align when “buckets” (another technical term) are full and alphabets are in order. Otherwise, they do not align and the encodings are mismatched.
I made a tool to play with natural base conversions, and the RFC method is available under the “extras” tab. https://convert.zamicol.com
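For concreteness, Go spells roughly this flavour as base64.RawURLEncoding (URL-safe alphabet, padding dropped), and its Strict mode gives the canonical-decoding behaviour described above; a small illustrative sketch:

package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    b := []byte{0x84, 0xe9} // arbitrary two bytes
    s := base64.RawURLEncoding.EncodeToString(b)
    fmt.Println(s) // "hOk"

    // Strict decoding rejects encodings that differ only in the unused
    // trailing bits (e.g. "hOl"), instead of silently mapping them to the
    // same bytes as "hOk".
    _, err := base64.RawURLEncoding.Strict().DecodeString("hOl")
    fmt.Println(err != nil) // true: non-canonical input is an error
}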
Again, UTF-8 doesn’t guarantee what you’re suggesting, here. UTF-8 guarantees properties of individual runes (characters), not anything about the specific order of runes in a string.
Yes, for individual characters (runes). And a UTF-8 string is a sequence of zero or more valid UTF-8 characters (runes). But the order of those runes in a string is not relevant to the UTF-8 validity of that string.
The jump from UTF-8 to JSON is where some order information may be considered to be lost in the narrow scope of object keys, while acknowledging all the rest of JSON is still ordered, including the explicitly ordered arrays.
Order information is present and is passed along this abstraction chain. Order information can only be considered absent after the UTF-8 abstraction layer. At the UTF-8 layer, all relevant order information is fully present.
This isn’t true in the sense that you mean. UTF-8 is an encoding format that guarantees a valid series of ordered bytes for individual characters (i.e. runes) — it doesn’t guarantee anything about the order of valid runes in a sequence of valid runes (i.e. a string).
At the UTF-8 layer, all relevant order information is fully present.
Within each individual character (rune), yes. Across multiple characters (runes) that form the string, no. That a string is UTF-8 provides guarantees about individual elements of that string only, it doesn’t provide any guarantee about the string as a whole, beyond that each element of the string is a valid UTF-8 character (rune).
Sending JSON payload bytes {"a":1} does not guarantee the receiver will receive bytes {"a":1} exactly; they can just as well receive { "a": 1 }, and the receiver must treat those payloads the same.
edit: This sub-thread is a great example of what I meant in my OP, for the record 😞
UTF-8 is a series of ordered bytes. UTF-8 contains order information by definition.
That is the point: Order is present for UTF-8. Only after UTF-8 can order information finally start to be subtracted. Omitting order information at the UTF-8 abstraction layer is against UTF-8’s specification and is simply not permitted. Order information can only be subtracted after UTF-8.
JSON, by specification, marshals to and from UTF-8. In the very least, we have to acknowledge order information is available at the UTF-8 layer even if it is subtracted for JSON objects.
UTF-8 is an encoding for individual characters (runes). It defines a set of valid byte sequences for valid runes, and contains order information for the bytes comprising those valid runes. It does not define or guarantee or assert any kind of order information for strings, except insofar as a UTF-8 string is comprised of valid UTF-8 runes.
That JSON marshals to a UTF-8 encoded byte sequence does not mean that UTF-8 somehow enforces the order of all of the bytes in that byte sequence. Bytes in individual runes, yes; all the bytes in the complete byte sequence, no.
Order is present for UTF-8. Only after UTF-8 can order information finally start to be subtracted. Omitting order information at the UTF-8 abstraction layer is against UTF-8’s specification and is simply not permitted. Order information can only be subtracted after UTF-8.
I’m not sure what this means. UTF-8 asserts “order information” at the level of individual runes, not complete strings.
In the very least, we have to acknowledge order information is available at the UTF-8 layer even if it is subtracted for JSON objects.
UTF-8 does not provide any order information which is relevant to JSON payloads, except insofar that JSON payloads can reliably assume their keys and values are valid UTF-8 byte sequences.
If UTF-8 was not ordered, the letters in this sentence would be out of order as this sentence itself is encoded in UTF-8.
UTF-8 by definition is ordered. This is a fundamental aspect of UTF-8. There’s nothing simpler that can be said because fundamental properties are the simplest bits of truth: UTF-8 is ordered. UTF-8 strings are a series of ordered bytes.
UTF-8 is a string. Order is significant for all strings. All strings are a series of ordered bytes.
UTF-8 does not provide any order information which is relevant to JSON payloads
Yes, it has order information.
JSON inherits order, especially arrays, from the previous abstraction layer, in this case, UTF-8. If this were not the case, how is order information known to JSON arrays, which are ordered? Where is the order information inherited from if not from the previous abstraction layer?
Edit:
UTF-8 asserts “order information” at the level of individual runes, not complete strings.
That is incorrect. UTF-8 by definition is a series of ordered bytes, which is the definition of a string. UTF-8 already exists in that paradigm. It does not need to further confine a property it already inherits. UTF-8 is a string encoding format.
UTF-8 is a variable-length character encoding standard used for electronic communication.
—
JSON inherits order, especially arrays, from the previous abstraction layer, in this case, UTF-8. If this were not the case, how is order information known to JSON arrays, which are ordered? Where is the order information inherited from if not from the previous abstraction layer?
The order of JSON arrays is part of the JSON specification. It’s completely unrelated to how JSON objects are marshaled to bytes, whether that’s in UTF-8 or any other encoding format.
Is the order of fields in a CSV file “inherited from” the encoding of that file?
—
If UTF-8 was not ordered, the letters in this sentence would be out of order as this sentence itself is encoded in UTF-8.
At this point I’m not sure how to respond in a way that will be productive. Apologies, and good luck.
That is in the context of strings. JSON doesn’t define UTF-8 as its encoding format for a single character. JSON defines UTF-8 as the character encoding format for strings. Strings are ordered. The entirety of UTF-8 is defined in the context of string encoding.
The order of JSON arrays is part of the JSON specification
When parsing a JSON array, where is the array’s order information known from? Of course, the source string contains the order. JSON parsers must store this order information for arrays, as required by the spec. JSON inherits order from the incoming string.
JSON defines arrays as ordered, and objects as unordered. The specific order of array elements in a JSON payload is meaningful (per the spec) and is guaranteed to be preserved, but the specific order of object keys is not meaningful and is not guaranteed to be preserved.
When JSON is unmarshalled from a string, where does an array’s order information come from? Does it come from the incoming string?
Yes, it does. But the important detail here is that JSON arrays have an ordering, whereas JSON maps don’t have an ordering. So when you encode (or transcode) a JSON payload, you have to preserve the order of values in arrays, but you don’t have to preserve the order of keys in objects.
If you unmarshal the JSON payload {"a":[1,2]} to some value x, and the JSON payload {"a":[2,1]} to some value y of the same type, then x != y. But if you unmarshal the JSON payload {"a":1,"b":2} to some value x, and the JSON payload {"b":2,"a":1} to some value y of the same type, then x == y.
Coze models the Pay field as a json.RawMessage, which is just the raw bytes as received. It also produces hashes over those bytes directly. But that means different pay object key order produces different hashes, which means key order impacts equivalence, which is no bueno.
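That array/object distinction, as a small Go sketch (illustrative, nothing specific to Coze or Matrix):

package main

import (
    "encoding/json"
    "fmt"
    "reflect"
)

func main() {
    // Array order is semantically meaningful...
    var x1, y1 map[string]any
    json.Unmarshal([]byte(`{"a":[1,2]}`), &x1)
    json.Unmarshal([]byte(`{"a":[2,1]}`), &y1)
    fmt.Println(reflect.DeepEqual(x1, y1)) // false

    // ...object key order is not.
    var x2, y2 map[string]any
    json.Unmarshal([]byte(`{"a":1,"b":2}`), &x2)
    json.Unmarshal([]byte(`{"b":2,"a":1}`), &y2)
    fmt.Println(reflect.DeepEqual(x2, y2)) // true
}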
You can’t have it both ways. You can’t argue for JSON being both the pure abstract form and also a concrete string. JSON is not a string, JSON is an abstraction that’s serialized into a string; I agree with that. The abstract JSON is parsed from a concrete string, and strings carry order information. Obviously JSON is inheriting order from the abstraction layer above, which in this case is the string (UTF-8). The order is there, as shown by arrays being ordered.
When JSON is parsed from UTF-8, it is now in an abstract JSON form. When it’s serialized into UTF-8, it’s not the abstract JSON, it is now a string.
It’s not both. I don’t see any issue categorizing JSON as a pure abstraction, however, the abstraction is solidified when serialized.
JOSE, Matrix, Coze, PASETO all use UTF-8 ordering, and not only does it work well, but it is idiomatic.
These tools do not verify or sign JSON; they sign and verify strings, a critical distinction. After that processing, the string may then be interpreted as JSON. These tools are a logical layer around JSON, and the JSON these tools process is JSON. In the example of Coze, not all JSON is Coze, but all Coze is JSON. That’s a logical hierarchy without a hint of logical conflict. As I like to say, that makes too much sense.
I fully acknowledge your “JSON objects are unordered” standpoint, but after all this time I have no hesitation saying it’s without merit. Even if that were the case, in that viewpoint these tools are not signing JSON, they’re signing strings. All cryptographic primitives sign strings, not abstract unserialized formats. And that too is no problem; far better, JSON defines the exact serialization format. That’s the idiomatic bridge permitting signing. It’s logical, idiomatic, ergonomic, it works, but most of all, it’s pragmatic.
If JSON said in its spec, “JSON is an abstract data format that prohibits serialization”, this would be a problem. But what use would such a tool be?
If JSON said, “JSON objects are unordered and the JSON spec prohibits any order information being transmitted in its serialized form” that too would be a problem, but why would it ever have such a silly prohibition? To say, “can’t sign JSON because it’s unordered” is exactly that silly prohibition.
When JSON is parsed from UTF-8, it is now in an abstract JSON form. When it’s serialized into UTF-8, it’s not the abstract JSON, it is now a string. It’s not both. I don’t see any issue categorizing JSON as a pure abstraction, however, the abstraction is solidified when serialized.
My understanding of your position is: if user A serializes a JSON object to a specific sequence of (let’s say UTF-8 encoded) bytes (or, as you say, a string) and sends those bytes to user B, then — no matter how they are sent — the bytes that are received by B can be safely assumed to be identical to the bytes that were sent by A.
Is that accurate?
–
This assumption is true most of the time, but it’s not always true. How the bytes are sent is relevant. Bytes are not just bytes; they’re interpreted at every step along the way, based on one thing or another.
If JSON serialized bytes are sent via a ZeroMQ connection without annotation, or over raw TCP, or whatever, then sure, it’s reasonable to assume they are opaque and won’t be modified.
But if they’re sent as the body of an HTTP request with a Content-Type of application/json, then those bytes are no longer opaque, they are explicitly designated as JSON, and that changes the rules. Any intermediary is free to transform those bytes in any way which doesn’t violate the JSON spec and results in a payload which represents an equivalent abstract JSON object.
These transformations are perfectly valid and acceptable and common, and they’re effectively impossible to detect or prevent by either the sender or the receiver.
–
JOSE, Matrix, Coze, PASETO all use UTF-8 ordering, and not only does it work well, but it is idiomatic.
The JSON form defined by JOSE represents signed/verifiable payloads as base64 encoded strings in the JSON object, not as JSON objects directly. This is a valid approach which I’m advocating for.
Matrix says
Signing an object … requires it to be encoded … using Canonical JSON, computing the signature for that sequence and then adding the signature to the original JSON object.
Which means signatures are not made (or verified) over the raw JSON bytes produced by a stdlib encoder or received from the wire. Instead, those raw wire bytes are parsed into an abstract JSON object, that object is serialized via the canonical encoding by every signer/verifier, and those canonical serialized bytes are signed/verified. That’s another valid approach that I’m advocating for.
The problem is when you treat the raw bytes from the wire as canonical, and sign/verify them directly. That isn’t valid, because those bytes are not stable.
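A rough Go sketch of that canonicalize-then-sign flow; the canonicalizer here is only an approximation (Go’s json.Marshal happens to sort map keys and strip insignificant whitespace), not Matrix’s exact Canonical JSON rules:

package main

import (
    "crypto/ed25519"
    "crypto/rand"
    "encoding/json"
    "fmt"
)

// canonicalize re-encodes whatever arrived on the wire into one fixed form,
// so signer and verifier operate on the same bytes even if an intermediary
// reformatted the payload.
func canonicalize(wire []byte) ([]byte, error) {
    var v any
    if err := json.Unmarshal(wire, &v); err != nil {
        return nil, err
    }
    return json.Marshal(v)
}

func main() {
    pub, priv, _ := ed25519.GenerateKey(rand.Reader)

    sent := []byte(`{ "b": 2, "a": 1 }`)
    received := []byte(`{"a":1,"b":2}`) // reformatted in transit

    cSent, _ := canonicalize(sent)
    cRecv, _ := canonicalize(received)

    sig := ed25519.Sign(priv, cSent)
    fmt.Println(ed25519.Verify(pub, cRecv, sig)) // true: same canonical bytes
}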
Coze speaks to Coze. Coze is JSON; JSON is not necessarily Coze. Coze is a subset, not a superset. Coze explicitly says that if a JSON parser ignores Coze and does a Coze-invalid transformation, that coze may be invalid.
The JSON form defined by JOSE represents signed/verifiable payloads as base64 encoded strings in the JSON object,
Incorrect. There’s no logical difference between encoding to UTF-8 or base 64.
This is exactly the mismatch. Since “JSON objects don’t define order”, any JWT implementation may serialize payloads into any order. Base 64 isn’t a magic fix for this.
Of course, all implementations serialize into an order. That’s what serialization does by definition. And it doesn’t matter what the serialization encoding is, by definition, any serialization performs exactly this operation.
It’s so obvious, so foundational, so implicitly taken for granted, that the fact is being overlooked.
Yeah, it’s probably as good as it gets. I guess I still need to sort maps manually, and be careful which types I use, in order to get the same output for equivalent input data, but I might be misremembering things. I’ll have another look at the details, I remember that dag-cbor was pretty close to what I needed when I looked last time, but it only allows a very limited set of types.
It’s really hard! Bijectivity itself is easy, just take the in-memory representation of a value, dump the bytes to a hex string, and Bob’s your uncle. But that assumes two things (at least) which probably aren’t gonna fly.
First, that in-memory representation is probably only useful in the language you produced it from — and maybe even the specific version of that language you were using at the time. That makes it impractical to do any kind of SDK in any other language.
Second, if you extend or refactor your type in any way, backwards compatibility (newer versions can use older values) requires an adapter for that original type. Annoying, but feasible. But forwards compatibility (older versions can use newer values) is only possible if you plan for it from the beginning.
There are plenty of serialization formats which solve these problems: Thrift, Protobuf, Avro, even JSON (if you squint), many others. But throw in bijective as another requirement, and I think CBOR is the only one that comes to mind. I would love to learn about some others, if anyone knows of some!
But it’s a properly hard problem. So hard, in fact, that any security-sensitive projects worth its salt will solve it by not having it in the first place. If you produce the signed (msg) bytes with a stable and deterministic encoder, and — critically — you send those bytes directly alongside the signature (sig) bytes as values in your messages, then there’s no ambiguity about which bytes have been signed, or which bytes need to be verified. Which means you can use whatever encoder you want for the messages themselves — JSON can re-order fields, insert or remove whitespace between elements, etc., but it can’t change the value of a (properly-encoded) string. And because you don’t need to decode the msg bytes in order to verify the sig, you don’t need full bijectivity, in either encoder.
Thanks! This looks quite interesting! I’ll have a play with the Rust bindings and see what it can do. I haven’t looked in detail yet, but it looks like it plugs into serde, so it should be easy and cheap to try it out.
Looks like the beginning of the end for the unnecessary e-waste provoked by companies forcing obsolescence and anti-consumer patterns made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m a EU citizen, and I see this argument made every single time the EU passes a new legislation affecting tech. So far, those worries never materialized.
I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell their products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as having the stem be detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken, it is about a service-time-replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves special exception for batteries that’s even more manufacturer-friendly than other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirements set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic) so in practice that just means, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Do you think Apple engineers are not capable of designing AirPods that have a removable battery?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to refuse consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example, “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
But once again, Steve Jobs objected, because he didn’t like the idea of customers mucking with the innards of their computer. He would also rather have them buy a new 512K Mac instead of them buying more RAM from a third-party.
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent heavily from the taxpayer’s purse and not spawned a silicon valley of their own.
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
Silicon Valley was born through the intersection of several contributing factors, including a skilled science research base housed in area universities, plentiful venture capital, permissive government regulation, and steady U.S. Department of Defense spending.
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much of a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community, and which many of their chats were designed after, they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol via a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reasons the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you, or gives you RF burns, or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious, however: do you see the current situation as tenable? You mention above that there are anti-consumer practices and the like, but also express concern that regulation will quickly slide down a slippery slope. Do you think the current system, where there is more and more lock-in both on the web and in devices, can be pried back from those parties?
The alternative is not regulating, and it’s delivered absolutely stunning results so far.
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There were a lot of economic incentives, and it was a new field of applied science that benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again and the associated anti-consumer decisions to the benefit of the bigger players. This situation is well known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations won’t attempt to maximize their profits when they have the opportunity.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
As a customer, I react to this by never voluntarily buying Apple products.
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, because adding TOTP didn’t improve my security by one iota (I already use KeePassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
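For context on why the TOTP step can feel like security theatre when the secret lives next to the password: a TOTP code is just an HMAC of a shared secret and the current 30-second time step, so keeping that secret in the same KeePassXC database as the password collapses both “factors” into one vault. Here’s a minimal sketch of RFC 6238 generation, with a hypothetical base32 secret (a real setup should use a vetted library such as pyotp):

```python
# Minimal RFC 6238 TOTP sketch; the base32 secret below is hypothetical.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # moving factor: time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six digits valid for the current 30 s window
```

The second factor only adds much if that secret is stored somewhere an attacker who already has your password can’t reach.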
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product.
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked-down phones with impossible-to-swap components (pairing), that are supported for only a year, what are people to do? If people have no idea how secure someone’s authentication is on GitHub, how can they make an informed decision about security?
But by the same token, if people don’t care to research the repair costs of their devices before buying them
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, in hiring people to ensure compliance, and especially the burden on new entrants to the field.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Ensuring “additional” compliance is often a one-time cost. As an EE, you’re supposed to know these things and keep up with them, you don’t come up with a schematic like they taught you in school twenty years ago and hand it over to a compliance consultant to make it deployable today. If there’s a major regulatory change you maybe have to hire a consultant once. More often than not you already have one or more compliance consultants on your payroll, who know their way around these regulations long before they’re ratified (there’s a long adoption process), so it doesn’t really involve huge costs. The additional compliance testing required in this bill is pretty slim and much of it is on the mechanical side. That is definitely not one-time but trivially self-certifiable, and much of the testing time will likely be cut by having some of it done on the supplier end (for displays, case materials etc.) – where this kind of testing is already done, on a much wider scale and with a lot more parameters, so most partners will likely cover it cost-free 12 months from now (and in the next couple of weeks if you hurry), and in the meantime, they’ll do it for a nominal “not in the statement of work” fee that, unless you’re just rebranding OEM products, is already present on a dozen other requirements, too.
An embarrassing proportion of my job consisted not of finding creative ways to fit a removable battery, but in finding creative ways to keep a fixed battery in place while still ensuring adequate cooling and the like, and then in finding even more creative ways to design (and figure out the technological flow, help write the servicing manual, and help estimate logistics for) a device that had to be both testable and impossible to take apart. Designing and manufacturing unrepairable, logistically-restricted devices is very expensive, too, it’s just easier for companies to hide its costs because the general public doesn’t really understand how electronics are manufactured and what you have to do to get them to a shop near them.
The intrinsic difficulty of coming up with a good design isn’t a bigger barrier to entry for new players than it is for anyone else. Rather, most of them can’t materialise radically better designs because they don’t have access to good suppliers and good manufacturing facilities – they lack contacts, and established suppliers and manufacturers are squirrely about working with them because they aren’t going to waste time on companies that are here today and gone tomorrow. When I worked on regulated designs (e.g. medical) that had long-term support demands, that actually oiled some squeaky doors on the supply side, as third-party suppliers are equally happy selling parts to manufacturers or authorised servicing partners.
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-run marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen who has the same priorities I do not be able to buy the device they want?
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate for their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ‘90s. Schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying in services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside; they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because they have locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice but to upgrade.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1], so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, but one of the drafts required labelling the support lifetime (which research has shown has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s a pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of markets is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
I might sound like I’m dismissing an entire scientific discipline, but economics has shown strong signs of being extremely problematic on this front for a long time.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now?
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The regulations eventually banned goods below a certain efficiency rating, but it was largely unnecessary because the market adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV
I can see how such usurpation could distort my view.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist.
Well… yeah.
Precisely what [markets] will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here.
I love this example. It plainly shows that people often don’t fail to optimise for such and such a criterion because they don’t care about it; they fail because they just can’t measure the criterion even if they do care. Even a Libertarian should admit that making good purchase decisions requires being well informed.
You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do).
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
My favourite example is the Internet: the fibre should be installed by public bodies (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
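To make the pricing tension concrete, here’s a back-of-the-envelope sketch with entirely made-up numbers; the point is only that a flat per-subscriber rate has to cover maintenance plus a sinking fund for the next build-out, and that the break-even fee is quite sensitive to take-up:

```python
# What flat monthly fee covers maintenance plus a sinking fund for a future
# upgrade amortised over ten years? All figures are hypothetical.
def breakeven_monthly_fee(subscribers: int,
                          annual_maintenance: float,
                          upgrade_capex: float,
                          amortisation_years: int = 10) -> float:
    sinking_fund_per_year = upgrade_capex / amortisation_years
    annual_cost = annual_maintenance + sinking_fund_per_year
    return annual_cost / (subscribers * 12)

# 20,000 subscribers, 1.5M/year maintenance, 30M for an eventual 10 Gb/s upgrade:
print(round(breakeven_monthly_fee(20_000, 1_500_000, 30_000_000), 2))  # 18.75
# Halve the take-up and the same plan needs 37.50 per subscriber per month.
```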
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less willing private enterprises are to invest in it, and if they do invest, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
The problem with public ownership is that it’s hard to incentivise efficiency improvements.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject; I’m not informed enough to have an opinion on where the public/private frontier is best placed.
The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less willing private enterprises are to invest in it, and if they do invest, the more they will want to extract rent from their investment
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away taxpayers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or where the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win); the latter ensures that the winner gets paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective, and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
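As a rough illustration of those two variants (the bidders and numbers are made up, not from any particular procurement framework), here’s how they differ in who wins and what gets paid in a reverse auction:

```python
# Two reverse-auction payment rules mentioned above, with hypothetical bids.
def second_lowest_wins(bids: dict[str, float]) -> tuple[str, float]:
    """Contract goes to the second-lowest bidder, paid at their own bid."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    return ordered[1]

def pay_second_price(bids: dict[str, float]) -> tuple[str, float]:
    """Lowest bidder wins but is paid what the next-lowest bidder asked for."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    return ordered[0][0], ordered[1][1]

bids = {"A": 4.2e6, "B": 5.0e6, "C": 7.5e6}
print(second_lowest_wins(bids))  # ('B', 5000000.0): a lowball bid doesn't win
print(pay_second_price(bids))    # ('A', 5000000.0): A wins, paid B's price
```

The second rule is essentially a reverse second-price (Vickrey-style) auction, which is why it removes some of the incentive to bid below what you can actually deliver for.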
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
A lot of interesting infrastructure was possible because private investors gambled and a lot of them lost a big pile of money.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them)
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
Were they actually successful, or did they only decrease operating energy use?
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
You can’t get a better rating by making a device that lasts half as long.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
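To put rough (entirely hypothetical) numbers on “close to the rounding error”: even generously assuming moving parts account for a tenth of a cycle’s energy, a 5% lighter moving assembly changes the declared figure by only about half a percent:

```python
# Hypothetical figures, only to size the effect discussed above.
cycle_energy = 0.800      # measured kWh per rated cycle
moving_share = 0.10       # assumed fraction of cycle energy spent on moving parts
mass_reduction = 0.05     # 5% lighter drum/agitator

saving = cycle_energy * moving_share * mass_reduction
print(f"saving per cycle:    {saving:.3f} kWh")                 # 0.004 kWh
print(f"new declared figure: {cycle_energy - saving:.3f} kWh")  # 0.796 kWh
# Visible at three decimal places, but nowhere near enough to move a device
# between label classes, and arguably not worth any durability risk.
```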
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved them in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been much progress in smartphone hardware over the last 10 years.
Since 2015 every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation, phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game – just as PC hardware specs were propelled by gaming, the mobile market is driven by games, and mobile is now, I believe, the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
I don’t think the games are really that CPU / GPU intensive
But a lot are intensive & enthusiasts often prefer them. Still, those time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I recently was reunited with my OnePlus 1 (2014) running Lineage OS, & it was choppy at just about everything (this was using the apps from when I last used it (2017) in airplane mode so not just contemporary bloat) especially loading map tiles on OSM. I tried Ubuntu Touch on it this year (2023) (listed as great support) & was still laggy enough that I’d prefer not to use it as it couldn’t handle maps well. But even if not performance bottle-necked, efficiency is certainly better (highly doubt it’d save more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
Original comment mentioned iPhone 8 specifically. Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years & more than competitive once you factor in cost. I doubt anyone is buying into either platform purely based on performance numbers anyhow, versus ecosystem and/or wanting hardware options not offered by one or the other.
Those are some cherry-picked comparisons. Apple releases on a different cadence. Check right now, & the S23 beats up on it, as do most flagships. If you blur the timing, it’s all about the same.
With phones of the same tier released before & after you can see benchmarks are all close as is battery life. Features are wildly different tho since Android can offer a range of different hardware.
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, 12 have distinct visual styles, and between vendors this distinction can vary further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, used to), but consumers notice even if they can’t clearly relate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
All innovation isn’t equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interest. I think it’s a lie invisible-hand-believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means that they were given a Hobson’s choice.
but manufacturers don’t provide it because it goes against their interest.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
This cynical view is unwarranted in the case of EU, which so far is doing pretty well avoiding regulatory capture.
EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements, and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc. Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels:
The number of years that the device would get security updates.
The maximum time between a vulnerability being disclosed and the device getting the update.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels but companies are free to put as much or as little as they wanted. Her research looked at this across a few consumer good classes and used the standard methodology where users were shown a small number of devices with different specs and different things on these labels and then asked to pick their preference. This was then used to vary price, features, and security SLA. I can’t remember the exact numbers but she found that users consistently were willing to select higher priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with a (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced unibody MacBooks? Or the 10-minute video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery while staying waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.
Depending on how you define “progress” there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that will ever come to the mass-produced iPhone, but maybe a useful one for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific ones it came with from the factory.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
This overall shift will favor long-term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
What the hell does “manufacturers will have to make compatible software updates available for at least 5 years” mean? Who determines what bugs now legally need to be fixed on what schedule? What are the conditions under which this rule is considered to have been breached? What if the OS has added features that are technically possible on older devices but would chew up batteries too quickly because of missing accelerators found on newer devices? This is madness.
Took me all of 10 minutes to find the actual law instead of a summary. All of the questions you have asked have pretty satisfying answers there IMO.
From the “Operating system updates” section:
from the date of end of placement on the market to at least 5 years after that date, manufacturers, importers or authorised representatives shall, if they provide security updates, corrective updates or functionality updates to an operating system, make such updates available at no cost for all units of a product model with the same operating system;
If you fix a bug or security issue, you have to backport it to all models you’ve offered for sale in the past 5 years. If your upstream (android) releases a security fix, you must release it within 4 months (part c of that section).
Part F says if your updates slow down old models, you have to restore them to good performance within “a reasonable time” (yuck, give us a number). An opt-in feature toggle that enables new features but slows down the device is permitted, which I suspect is how your last question would be handled in practice.
The really scary thing is that it took users weeks to notice that it shipped, even though it wasn’t obfuscated in any way. This shows how risky the ecosystem is without enough eyes reviewing published crates. If any high-profile crate author gets infected with malware that injects itself into crates, it’s going to be an apocalypse for Rust.
I think it’s only a sign that we’re unaware until this hits a sandboxed / reproducible build system. I guess that’s currently distribution packaging or projects that otherwise use Nix or Bazel to build.
If the complaint is that binaries are more difficult to audit than source, and no one is auditing, then it should make no difference either way from a security perspective.
I think “weeks” is a bit of an exaggeration. People were openly discussing it at least a week after release. It’s true though that it didn’t blow up on social media until weeks later and many people didn’t realise until then.
If it had been a security issue, or had been done by someone much less reputable than the author of serde, or if the author had not responded, then I suspect rustsec may have been more motivated to post an advisory.
Something that I might have expected to see included in this comment, and that I instead will provide myself, is a plug for bothering to review the code in one’s (prospective) dependencies, or to import reviews from trusted other people (or, put differently, to limit oneself to dependencies that one is able and willing to review or that someone one trusts has reviewed).
I recall that kornel at least used to encourage the use of cargo-crev, and their Lib.rs now also shows reviews from the newer and more streamlined cargo-vet.
I note that the change adding the blob to Serde was reviewed and approved through cargo-vet by someone at Mozilla. I don’t think that necessarily means these reviewing measures would not be useful in a situation that isn’t as much a drill (i.e., with a blob more likely to be malicious).
Yeah - my recollection of crev is that libraries like serde often got reviews like “it’s serde, might as well be the stdlib, I trust this without reviewing it as the chances of it being malicious are basically zero”
Ah, the days when the entire campus shared a T-1 connection and it was just easier for UCB to mail you a 9-track of BSD 4.2 than to try and download it.
Apparently he wrote this in 1981 [1]. It’s amazing how over 40 years later, the premise still holds true. There seems to be a fundamental principle at play that could not be broken with technological innovations. Makes me wonder if there is a physical law that limits the speed of transfer of information depending on the mass and energy that is used to transmit it.
We’re moving a ton of data around these days, for $REASONS. I have repeatedly pointed to this example when people respond to the station wagon analogy with “that’s just something you oldsters say…that doesn’t matter now”. Suddenly, a dim light goes on (an LED, not a light bulb).
As that xkcd says, “all of that data is coming from somewhere, and going somewhere.”
To really hammer home how quickly the price of storage is still shrinking, the article suggests about 900-1000 USD for a 1TB SSD, but you can buy a 1TB Western Digital Black NVMe on Amazon.com right now for $50.
Perhaps? What’s the signal power of matter you take along? You just take e = mc²? I haven’t thought this through in detail, I was just wondering as I wrote the comment.
With digital data transfer you have to essentially take 1 electron at a time to the new destination.
With physical data transfer you can take trillions of electrons at a time to the new destination.
Your comment about e = mc² really does seem to touch this. We can only fling 1 electron near / at the speed of light at a time. So there’s a hard limit on how much data / second you can do. We can carry tons of electrons though at a time.
So we’re optimizing for “sending the least information that makes the most sense to us in the least time”, more or less. We don’t need to be sending each other 100GB videos, and we don’t. It’s much faster to send a 1GB video that is as recognizable as a 10GB video over the internet. It comes down to “we want to see something as fast as possible”. i.e. what most people say about this topic: optimizing for latency.
I think it’s better to see things as “digital objects”, like for example, if you wanted the Ethereum blockchain. It’d be faster to get it on a flash drive than downloading it for most people, but it can’t be broken down further than its “useful unit” i.e. itself.
100% awesome thought :)
Someone could probably graph when these thresholds cross? :o
With digital data transfer you have to essentially take 1 electron at a time to the new destination
Most bulk data transfer these days is over fibre. With modern fibre, you can send multiple wavelengths simultaneously. The upper limit on this is one symbol per photon, though I’m not sure that there is a theoretical limit on the number of distinct symbols. Practical limits depend on the discrimination of the receiver and the flexibility of the bends (different wavelengths travel at different speeds because they refract differently, so unless you have a perfectly straight fibre and send the photons down perfectly straight, there are limits on how closely you can space them).
Similarly, flash doesn’t store one bit per electron. Each cell holds a charge and that encodes one symbol. With MLC, there are typically 4-8 possible values per cell. In theory, you could build storage where you use different energy levels in electrons on a single atom to represent this. Quantum mechanics tells you both how many possible symbol values you can store and why it’s really hard to build something useful like this.
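As a rough sanity check on the symbol argument (not tied to any particular fibre or flash part), the information per symbol grows only logarithmically with the number of distinguishable levels, which is why piling on more levels gives diminishing returns:

```python
import math

def bits_per_symbol(levels: int) -> float:
    """Bits carried by one symbol drawn from `levels` distinguishable values."""
    return math.log2(levels)

for levels in (2, 4, 8, 16):
    print(f"{levels} levels -> {bits_per_symbol(levels):g} bits/symbol")
# 4 levels per cell (typical MLC) carries 2 bits; 8 levels (TLC-style) carries 3.
```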
For both forms of transfer, the practical limitations of the specific technology have much more impact than the theoretical limits.
It’s also worth noting that most of these comparisons ignore the bus speed on the removable storage. My home Internet can download (but not upload, because my ISP is stupid) data faster than I can read from cheap flash drives. Commercial fibre is faster than most local storage unless you are able to read and write a lot in parallel.
Yeah, the electron bit was just to simplify thinking around things, but you’re totally right. And good point about bus speed. I think the counter argument there is “you have the data at your disposal now”, so you don’t need to read it all off onto your primary storage device?
Last night before sleeping I basically concluded that if you can just add more wires (and I guess as you explained, wavelengths) then you can always beat the “practical carry limit” of physical data transfer… Except it seems you can always store more before you can transfer faster, at least that’s the trend right?
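To put a toy number on that intuition, here is a back-of-the-envelope comparison; the drive capacity, link speed, and shipping time are assumptions, not measurements:

```python
TB = 1e12  # bytes

def transfer_hours(total_bytes: float, bits_per_second: float) -> float:
    """Hours to push `total_bytes` over a link with the given raw bit rate."""
    return total_bytes * 8 / bits_per_second / 3600

payload = 100 * TB        # assume a small box of large drives
link = 1e9                # assume a 1 Gbit/s connection
courier_hours = 24        # assume overnight shipping

print(f"over the link: {transfer_hours(payload, link):.0f} hours")               # ~222 hours
effective_gbps = payload * 8 / (courier_hours * 3600) / 1e9
print(f"courier: {courier_hours} hours, ~{effective_gbps:.0f} Gbit/s effective")  # ~9 Gbit/s
```

The crossover point moves as links get faster, but as long as drive capacities keep growing too, there is always some payload size where the courier wins on bandwidth (never on latency).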
I’d argue the principle at play is the latency/bandwidth trade-off.
You can absolutely start pushing enough bandwidth to beat the pigeon over most ordinary wire. But the protocol involved will mean your latency will suffer and normal usage will be near impossible.
For example, you could write a protocol that spends a minute probing out the exact physical properties of the transfer wires between source and destination so it can model the wire and figure out how to blast through a 100-gigabit packet of data once. The receiver spends that minute doing much the same, probing out the wire. Once the blast is complete you’d have to remeasure, as it’s now another minute later and the wire might have warmed up, changing properties. Plus the receiver now needs to process this 100-gigabit blast and ACK the reception (even just hashing it to make sure it got there will take a moment with 100 gigabits of data). Retransmissions will cost more time. But ultimately this incredibly horrific latency buys you a lot of bandwidth on average, not a lot in the median.
On the flip side, you can skip all of that and get much lower average bandwidth but much more usable latency guarantees, plus better median bandwidth.
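A toy model of that trade-off (the setup time and burst rate below are invented numbers, not any real protocol): with a fixed per-transfer setup cost, small payloads see terrible effective throughput and only very large ones approach the burst rate.

```python
def effective_bps(payload_bits: float, setup_s: float, burst_bps: float) -> float:
    """Average throughput once the fixed per-transfer setup time is included."""
    return payload_bits / (setup_s + payload_bits / burst_bps)

SETUP = 60.0      # assumed: one minute of probing per transfer
BURST = 100e9     # assumed: 100 Gbit/s once the channel is modelled

for payload in (1e6, 1e9, 100e9, 10e12):  # 1 Mbit .. 10 Tbit
    rate = effective_bps(payload, SETUP, BURST) / 1e6
    print(f"{payload:.0e} bits -> {rate:,.1f} Mbit/s effective")
```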
That ad blocker plea seems like a great case study of how fundamentally ad supported content is at odds with ethical disclosure. The plea is fairly convincing but when I went to the uBlock panel to allow Ethical Ads I saw it is using DoubleClick.
Ethical ads are only ethical under the (misguided, imo) framing that ads are bad because of the surveillance & privacy invasion, rather than the framing that ads are bad because they are researched attempts at modifying your behavior to suit corporate ends.
If you want to make money off of me, find a less scummy way than subtly massaging my personality by exposing me to visual/linguistic malware.
Way off topic, but I think you can split ads into two categories:
Those designed to inform about a product.
Those designed to manipulate people.
The former class is essential to business and, I would argue, is ethical. A lot of early Google ads were like this: they told you about a thing that’s relevant to the thing you’re reading about. A lot of ads in technical magazines are like this too: they include technical specifications and other information that you need to make an informed decision. I used to buy a magazine called Computer Shopper that was well over 50% ads, and I bought it in large part because of the ads: they gave me a great overview of what was available, and at what prices, from a load of different manufacturers. A lot of trade magazines are like this.
I would love to see something that properly incentivises this kind of advert. I strongly suspect that, for companies that have a decent product, it actually works better because it gets their product to the people who others are most likely to ask for recommendations.
Offtopic: I’d argue that any ad is in the second category: manipulation. The sole purpose of an ad is to hijack your attention to imprint some information on you. Making me aware, without my asking, that a certain TV exists that costs X and has feature Y is manipulating me into taking it into consideration for buying.
A business that only wants to publish information about the TV could just create a product page or website.
Ontopic: this was my favorite talk of the conference! Both technical and entertaining.
Precisely! I don’t mind publishers getting a bunch of semi-anonymous details about me. I mind every search I run and every website I visit in the future becoming less helpful, less interesting, and less varied because everyone involved thinks they know which of my interests I might be willing to spend money on at any particular time.
I think the DoubleClick ads are the YouTube iframe. I can’t really do anything about that unless I repost the video on my CDN, which I don’t have the energy to do today. Maybe tomorrow.
Then obviously you shouldn’t tell people to turn off their ad blockers, as most ad blockers will allow those DoubleClick scripts when disabled for your site.
One reason I’m excited about the digital-€ project is that they are aiming for very low, even free, transaction costs, which would be key for micropayments. So I could actually pay something for articles I’d like to read. And hopefully people wouldn’t pay for the content that is now pushed just for getting you to view ads. There’s also https://webmonetization.org/
I think there are some big psychological problems with micropayments. People aren’t good at estimating tiny numbers multiplied by other numbers that they’re also bad at estimating. How many web pages do you visit per month? How much will you pay if you are paying each one 0.1¢ per view? What about 1¢ or 0.01¢? I honestly couldn’t tell you how much I’d be paying per month for these.
A big part of the success of things like Spotify and Netflix is that they are able to offer a flat rate that lets you avoid thinking about this. If you could pay $10/month to support content creators and have a privacy-preserving system that would reward the ones that you valued, I think a lot of people would sign up. I actually designed such a system and pitched it to the chief economist at MS a few years back. Sadly, the feedback was that, if we tried to deploy such a system, Google would be in a position to scale it up faster (and it’s the kind of thing where second place is largely worthless), so it didn’t make commercial sense to do so.
I don’t know if the problem is actually the nature of micropayments. Tens of millions of people already use the micropayment systems in video games where the currencies cost real money - because they develop loose heuristics for how much virtual items are worth. Sure, they usually buy far fewer items than one might imagine for an internet-scale content micropayment system - but I don’t see any reason that intuition, which is proven to work currently, wouldn’t scale to larger volumes, especially because many tens of thousands play games such as EVE Online where transaction volumes approach (or exceed) those of the real world.
At the very least, having a subscription-style system like the one that you describe would provide the infrastructure to test such a microtransaction system.
Tens of millions of people already use the micropayment systems in video games where the currencies cost real money
Do you have a citation for that? Last time I looked at the economics of these systems, the overwhelming majority of spending came from a tiny fraction of players, which the industry referred to as ‘whales’. The games are specifically designed to trigger an addiction response in vulnerable people and make Google Ads look ethical in comparison.
This study[1] had 688 of 1000 (69%) respondents self-report that they spent money on Fortnite. In 2020, Fortnite had ~80 million monthly active users (discarding the 320 million users that aren’t currently active). We’re not going to get data from the company itself, but it’s highly plausible that tens of millions of people engage in Fortnite’s microtransaction system alone, ignoring smaller systems like EVE Online and Diablo 4 and Genshin Impact (and all the other gatcha games).
Last time I looked at the economics of these systems, the overwhelming majority of spending came from a tiny fraction of players
While we don’t have the statistical distribution of spending patterns, the fact that millions (minimum) of people use these systems means that even if the industry of free-to-play games is mostly exploitative (which I agree, it is), millions of people have at least some experience with these systems, and it’s highly probable that there’s a significant intersection between active users and paid users (that is, that many users are familiar with the system, as opposed to only having touched it once and never again).
From first principles - humans don’t have a natural intuition for “real” money and “normal” purchase patterns, either - it has to be learned. I don’t see any plausible reason to believe that it’s substantially more difficult for humans to learn to use a microtransaction system than the “normal” ones that we have now.
As to your earlier point:
People aren’t good at estimating tiny numbers multiplied by other numbers that they’re also bad at estimating.
…people will develop that ability to estimate and/or budget if they actually have to use such a system. You could make a similar claim about our current financial system - “People are bad at estimating how much it costs to buy a $6 Starbucks every workday for a month” - and while initially that’s true, over time some people do learn to get good at estimating that, while other people will actually sit down and work out how much that costs - it’s the same mechanic for microtransactions.
How many web pages do you visit per month? How much will you pay if you are paying each one 0.1¢ per view?
I opened Firefox’s history, selected the month of July, and saw that I visited 4000 pages. At 0.1c per view, that’s $4 - not a problem. At 1c per view, that’s $40 - a reasonable entertainment budget that is comparable in size to going out for dinner with my wife once.
Yes, it took me one full minute to do that math, and that’s not instantaneous, but not everything in our lives has to be instantaneous or built-in intuition. (plus, now that I’ve computed that result, I can cache it mentally for the foreseeable future) I really think that you’re significantly overestimating the amount of work it takes to do these kinds of order-of-magnitude calculations, and under-estimating how adaptable humans are.
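For what it’s worth, that estimate is a one-liner to recompute for other prices; the 4000-page figure below is just the one from the history check above:

```python
def monthly_cost_dollars(pages_per_month: int, cents_per_view: float) -> float:
    """Flat per-view micropayment spend over a month, in dollars."""
    return pages_per_month * cents_per_view / 100

for cents in (0.01, 0.1, 1.0):
    print(f"{cents}¢/view x 4000 pages = ${monthly_cost_dollars(4000, cents):.2f}/month")
# 0.01¢ -> $0.40, 0.1¢ -> $4.00, 1¢ -> $40.00
```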
As for the necessity of said adaptation - I’d much rather have a microtransaction system than either a subscription one (not ideal for creators) or an ad-based one (with all of the problems that it entails) - making the purchase decisions even acts as a built-in rate limiter to gently pressure me to spend less time on entertainment, and less time distractedly skimming pages, and more time carefully and thoughtfully reading longer and more useful articles. I think that such a system would have extremely useful second-order effects for our society at large, precisely because of the need to slow down to assess whether a piece of information or entertainment is worth your money (even though your time is usually much more valuable).
A true micropayment system would allow me to fund one “wallet” and buy stuff from MSFT, Nintendo, Ubisoft, etc. Right now you need a separate account for each of them. That’s more a consequence of their target audience usually not having access to credit cards than a sign of the success of micropayments.
I don’t understand the argument you’re making here.
I’m not claiming that the micropayment systems in video games are “real” or “general internet-scale” micropayment systems - obviously, they’re not - and I’m not claiming that micropayments are “successful”, because these video game systems are very different than the kind being discussed here.
Instead, I’m pointing out that the existence of these systems is proof that humans can use virtual/abstracted currencies to purchase intangible goods, which is basically what you need, cognitively, for a micropayment system to succeed, and I’m also trying to refute David’s claim that “People aren’t good at estimating tiny numbers multiplied by other numbers that they’re also bad at estimating.”
We should separate micropayments the technology from micropayments the interface. If we had the technology, there’s no reason you couldn’t have a client program divide a certain amount of donations per month between sites you use based on an algorithm of your choosing. Of course this arrangement is harder to profit off of because there is less room for shenanigans, but that also makes it better for everyone else.
Here’s a story from 13 years ago about Flattr and how it was trying to tackle micropayments. It had essentially that model - you prepaid and it divided the donations.
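A minimal sketch of that “client divides a monthly pot” idea, assuming the client tracks attention locally (minutes per site here) and that the budget and site names are placeholders:

```python
def split_budget(budget_cents: int, attention: dict[str, float]) -> dict[str, int]:
    """Divide a fixed monthly budget across sites in proportion to locally tracked attention."""
    total = sum(attention.values())
    if total == 0:
        return {site: 0 for site in attention}
    return {site: round(budget_cents * minutes / total) for site, minutes in attention.items()}

# Hypothetical month: $10.00 split by minutes spent per site.
print(split_budget(1000, {"blog-a.example": 120, "news-b.example": 300, "docs-c.example": 80}))
# {'blog-a.example': 240, 'news-b.example': 600, 'docs-c.example': 160}
```

Because the split happens client-side, the weighting never has to leave the user’s machine, which is roughly what the “less room for shenanigans” point above is getting at.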
Isn’t this 100% GitHub’s fault? They’re allowing malicious actors to impersonate them on their own platform. Shouldn’t this expose GitHub to litigation?
The key bit in the article is this:
So, first the attackers somehow get the credentials from the user via some out of band mechanism. At this point, they are already able to do pretty much anything that the user can do. Possibly they have a limited-access token, but they can at least post PRs impersonating the repo owner. At that point, the PR appears to come from a bot that you’ve given trust to and so you trust it.
The real question is how did they get the PATs. If that’s a GitHub vulnerability, that’s bad. If that’s users leaking their credentials then users need to not do that, just as they also need to not leak their SSH private keys.
Expose… Microsoft… to litigation? BAHAHAHAHA
This is such a promising system, but it requires that banks to try out and adopt the system for it to be viable. So far from what I’ve heard, it is mostly student-run university snack kiosks that use it.
I realize this is an incredibly tall order, but does anyone know if there have even been experiments where FLOSS advocates run their own credit union in order to break new ground on stuff like this?
I didn’t know it was deployed anywhere yet. Any idea which universities?
The only one I’m aware of is Bern University of Applied Sciences which at least had one https://www.bfh.ch/en/research/reference-projects/gnu-taler-snackautomat/
I’ve also heard that these small scale deployments are something the Taler org wants make easier to do, so maybe there will be more of them in the future
Crypto while neat will never see full-hearted support from the likes of say the EU. The value add of Taler is that one can mint essentially an IOU, send it anonomysly, and then when the recipient wants to claim it then their bank nor my bank will have no way of knowing who at my bank sent the money / created the IOU. You can also tag the money with how it may be spent, should you want bar your child from buying smut for example. Then there is lots of double-spending protection and such.
Complaints about the screen resolution are a matter of aesthetics, unless you work on visual digital media. In practice, a low resolution is often easier to use because it doesn’t require you to adjust the scaling, which often doesn’t work for all programs.
That said, the X220 screen is pathetically short. The 4:3 ThinkPads are much more ergonomic, and the keyboards are better than the **20 models (even if they look similar). Unfortunately the earlier CPU can be limiting due to resource waste on modern websites, but it’s workable.
The ergonomics of modern thin computers are worse still than the X220. A thin laptop has a shorter base to begin with, and the thinness requires the hinges to pull the base of the top down when it’s opened, lowering the screen further. The result is that the bottom of the screen is a good inch lower than on a thick ThinkPad, inducing that much more forward bending in the user’s upper spine.
The top of the screen of my 15” T601 frankenpad is 10” above my table and 9.75” above the keyboard. Be jealous.
A matter of aesthetics if the script your language uses has a small number of easily distinguished glyphs.
As someone who frequently reads Chinese characters on a screen, smaller fonts on pre-Retina screens strain my eyes. The more complex characters (as well as moderately complex ones in bold) are literally just blobs of black pixels and you have to guess from the general shape and context :)
I strongly disagree here. I don’t notice much difference with images, but the difference in text rendering is huge. Not needing sub-pixel AA (with its associated blurriness) to avoid jagged text is a huge win and improves readability.
Good for you. Your eyesight is much, much better than mine.
I am typing on my right hand monitor right now. It is a 27-inch (1440 × 2560) Apple Thunderbolt Display.
My left screen is a Built-in Retina Display and it is 27-inch (5120 × 2880).
The left has 4x the pixels of the right, i.e. 2x the linear pixel density.
At 30-40cm away, I can’t see any difference between them.
If I peer with my nose 2cm from the screen the left is marginally sharper but I would hate to have to distinguish them under duress.
I’d be pretty surprised by that, my eyesight is pretty terrible. That’s part of why high resolution monitors make such a difference. Blurry text from antialiasing is much harder for me to read and causes eye strain quite quickly. Even if I can’t see the pixels on lower resolution displays, I can’t focus as clearly on the outlines of characters and that makes reading harder.
I think you would be hard pressed to demonstrate a measurable difference in readability, accounting for what people are used to.
You accidentally double posted this comment.
thanks
As an X220 owner, while I concede someone may like the aesthetics of a low-resolution screen, the screen is quite bad in almost all other ways too. But you’re definitely right about aspect ratio. For terminal use a portrait 9:16 screen would be much better than 16:9. Of course external displays are better for ergonomics and nowadays large enough to work in landscape, too.
I was very fond of my X220 and would still be using it if it hadn’t been stolen from me, but even at the time the display was disappointing and the trackpad was dreadful. I wouldn’t call it the best, certainly not now.
you’re not supposed to use the trackpad, you can disable it in the bios :)
“the feature is so bad that I recommend removing it” doesn’t scream “best laptop” to me.
Most laptops don’t even let you disable it, and provide no alternative. Can’t see how that’s better.
Most modern laptops have fully functional trackpads with gesture support. Apple’s even have proper haptics, so you can push the whole thing with a uniform click instead of the hinged designs most have. I used to be a TrackPoint diehard, but I don’t miss it after using a MacBook.
You still have to take your hands off the keyboard to use it though, right?
While I certainly prefer the ThinkPad TrackPoint over traditional trackpads, I must admit, the MacBook’s trackpad is surprisingly usable with just my thumbs and without my fingers leaving the home row.
There are countless laptops with a trackpoint and also a functioning trackpad.
Yeah, and the best of them are ThinkPads.
I think you are misreading the point here.
Unlike most other laptops, the Thinkpad comes with a superior input device: the trackpoint. It requires less finger movement and it has 3 mouse buttons. That is why many people, including me, simply disable the trackpad.
Quality multi touch trackpads and gestures are too good. I’ll never go back to a Lenovo laptop unless it has one.
Since at least the x40 generation (Haswell) it’s all been decent Synaptics multi-touch trackpads. Nothing extraordinary, but nothing bothersome either, more than fine.
In my experience this isn’t true (at least for the Framework), and the post doesn’t provide any proof for this claim.
I’ve owned a ThinkPad X230, which is almost the same as the X220 apart from the keyboard and slightly newer CPU. I currently own a Framework 13. Although I didn’t own them both at the same time, and I also have no proof for the counter-claim, in my experience the Framework is no more fragile than the X230 and I feel equally or more confident treating the Framework as “a device you can snatch up off your desk, whip into your travel bag and be on your way.”
(I remember the first week I had the X230 I cracked the plastic case because I was treating it the same as the laptop it had replaced, a ThinkPad X61. The X61 really was a tank, there’s a lot to be said for metal outer cases…)
Confidence and security are subjective feelings, so if owning a chunky ThinkPad makes someone feel this way then good for them. Not to mention I think it’s awesome to keep older devices out of e-waste. However, I don’t think there’s any objective evidence that all newer laptops are automatically fragile because they’re thin - that’s perception as well.
I owned both a X220 and a X230 and I found the X230 to be much less durable than the X220, so the framework comparison might not quite stand up.
Oh that is good to know, thanks. I’d assumed they were mostly the same construction.
It’s a reasonable null hypothesis that a thicker chassis absorbs more shock before electronic components start breaking or crunching against each other. Maintaining the same drop resistance would require the newer components to be more durable than the older ones, which is the opposite of what I’d expect since the electronics are smaller and more densely packed.
How does the framework’s keyboard & trackpad measure up against the Thinkpad’s?
It’s been years since my x220 died, but IMO the trackpad on the framework is leaps and bounds better than the trackpad on the x220. (Though, the one caveat is that the physical click on my framework’s trackpad died, which is a shame since I much prefer to have physical feedback for clicking. I really ought to figure out how hard that would be to fix.)
The x220’s keyboard is maybe slightly better, but I find just about any laptop keyboard to be “usable” and nothing more, so I’m probably not the right person to ask.
x220 keyboard is peak laptop keyboard
From my recollection: keyboard of the X230 about the same, trackpad of the Framework better (under Linux).
The X230 switched to the “chiclet” keyboard so it’s considered less nice than the X220 one (people literally swap the keyboard over and mod the BIOS to accept it). I think they are both decent keyboards for modern laptops, reasonable key travel, and don’t have any of the nasty flex or flimsiness of consumer laptop keyboards. But not the absolute greatest, either.
I remember the X230 trackpad being a total pain with spurious touches, palm detection, etc. None of that grief with the Framework, but that might also be seven-ish years of software development.
I’ve tried both.
The Framework’s input devices are, for me, very poor.
That’s sad. Despite his claim that he’ll be alright for many years, the responsible thing would be to transition leadership of all the projects he leads now.
he leads projects?
… by being funded via a heavy store tax from an eternally buggy mess of a proprietary app store whose main value add is their network effect, set of sketchy engagement APIs, DRM (as in ‘Digital Rights Management’ or ‘corporate sanctioned malware’ depending on your optics) mainly selling proprietary software.
Its main reasons for ‘contributing’ are that it’s part of a risk management strategy for breaking away and more directly competing with Microsoft, empowering specifically those FOSS projects that fit their narrative, and promoting an architecture that is close to a carbon copy of its eventual end-game competitor. This time with more anti-cheat making its way in, should there be sufficient traction.
It is the Android story again on a smaller scale. How did that turn out last time? How many of the ‘contributions’ failed to generalise? Or is it different this time because Valve is good because games? Colour me sceptical.
I think Valve as a company has a lot of problems (though the DRM is pretty mild and one of their lesser problems tbh) and the Steam Deck iffier of a product than people make it out to be, but they’re actually being a good citizen here. Yes, they’re funding the things relevant to them out of self-interest (i.e. case-insensitive FS, WaitForMultipleObjects API clone, HDR, Mesa work, etc.), but they’re working with upstreams like the kernel, Mesa, and freedesktop.org to get the work merged upstream, be properly reviewed by the people that work on it, and be usable for everyone else. Android never worked with upstreams until maintaining their own things in an opaquely developed fork became unsustainable.
(Sometimes I think they might be using a bit too much commodity - the Arch base of SteamOS 3 seems weird to me, especially since they’re throwing A/B root at it and using Flatpak for anything user visible…)
You only need mild DRM and the DMCA for the intended barrier to entry, suppressive effect, and legal instruments; anything more exotic is just to keep the hired blackhats happy.
If I’m going to be a bit more cynical: they are not going the FreeDesktop/Linux rather than Android/Linux route out of the goodness of their hearts so much as because they simply lack access to enough capable system engineers of their own, and the numbers left after Google, then the ODMs, then Facebook/Meta, then all the other embedded shops have had their fill aren’t enough to cover even the configuration management needs.
Take their ‘contributions’ in VR. Did we even get specs for the positioning system? Activation / calibration for the HMD? Or was that a multi-year, expensive reversing effort trying to catch up and never quite getting there, just to be able to freely tinker with hardware we paid for? And that was for hardware they produced and sold at a time when open source wasn’t a hard sell by any means.
Did we get source code for the ‘Open’VR project that killed off other, actually open ones? Nope, binary blob .so:s and headers. OK, at least they followed the path beaten by id Software of providing copyleft versions of iterations of the Source engine so people can play around with ports, explore rendering tech, etc.? Nope. If you have spotted the source on the ’hubs it’s because it’s a version that was stolen/leaked.
Surely the lauded SteamDeck was sufficiently opened and upstreamed into the kernel? Well not if you include the gamepad portions. It’s almost as if the contributions hit exactly that which fit their business case and happens to feed into the intended stack and little to nothing else. Don’t anthropomorphise the lawnmower and all that.
To me, it looks like Valve wanted to make a gaming console, and used Linux as a way to pull that off. If you’d told me 25 years ago that you’d be able to play Windows games on a Linux machine that was handheld, I’d have been blown away. To me it still seems almost miraculous. And they’re doing this while (as far as I know) fulfilling their obligations to the various licenses used in the Linux ecosystem.
Does the fact they’re doing this for commercial gain invalidate that?
I don’t know what you expected. They’re almost certainly not going to give you the crown jewels used to make the headset, but all the infrastructure work is far more useful as it benefits everyone, not just the people with a niche headset.
They patented and own the hardware and the binary blobs, and sold the hardware devices at a hefty price tag. I’d expect an average FOSS participant to integrate with and reinforce existing infrastructure, not vertically integrate a side band that locks you into their other products.
What’s iffy about the Steamdeck? I’ve considered buying it but haven’t pulled the trigger yet.
I’ve owned a Steam Deck for about a year now. No idea why people have a problem with the size. My kids play on it and don’t complain, and we do have a Nintendo Switch (smaller) to compare. Sure it’s bigger, but I don’t think it’s a big deal. On top of that, with a dock it works perfectly as a home console system, so the size matters even less.
I really enjoy it and recommend getting it if you’re thinking about it.
As an owner of both, Steam Deck is quite heavier than Switch, and could have used an OLED screen.
I don’t like the size and ergonomics. But I’m in the minority on that one; people with big hands especially seem to love it.
There are more abstract concerns regarding its place as a product (is it a PC or a console? whichever is more convenient to excuse a fault), but otherwise the device is a pretty good value. I just don’t game that much, and when I do, it’s a social thing.
Thanks. Now I have to consider the size of my hands in relation to other people’s, something I understand can be fraught.
FWIW, I have small hands (I joke they are “surgeon’s hands”) and I don’t have a problem with the SD.
It might be a problem depending on what input method you use. I have average-sized hands and I like to use the trackpad for FPSes and strategy games. For the FPS case, reaching between the trackpads and the face buttons gets really annoying. You can map the grip buttons to the face buttons, but then you’re losing out there.
Even with big piano friendly hands the steamdeck ergonomics are hard. If you don’t have a big toy budget, testing someone else’s is highly recommended. I mostly use my three steamdecks for various debugging / UI / … experiments (nreal air glasses + dactyls and the deck tucked away somewhere). If Asus would be able to not be Asus for 5 minutes the ROG Ally would’ve been an easy winner for me.
And you wouldn’t consider Ally’s intended OS a problem?
I hacked mine to run Linux as I’ve done with all other devices I’ve used throughout the years; it didn’t boot Windows once. As far as their ‘intentions’ - whatever laptops I have scattered, all of them came bundled with Windows ‘intended’ for that to be the used OS.
DSDT fixes and kernel config to get the Ally working was less effort than I had to do to get actual access to the controllers and sensors on the Steam Deck.
Fair enough. I admin RHEL for my day job and use Arch on my laptop; when getting the SD I knew I’d keep it more appliance-y rather than getting into a fully custom OS. I just wanted something to play Persona 5 on in bed.
It’s heavy. Significantly heavy. It took me a while to figure out how to use it in a way that didn’t quickly give me wrist fatigue/pain, and even now it’s not perfect.
Also Valve’s Deck Verified program is very flawed. It’s quite a bit better than nothing, but it’s flawed. The biggest (but not only) problem IMO is that a game that has a control scheme not optimized for controllers - but still fully supports controllers - can be marked Verified. As an example, Civ V and Civ VI both basically just use the trackpad like a mouse, and the other buttons have some random keybinds that are helpful. Now, those are basically keyboard-and-mouse games… so to a certain extent I totally get it. But I should be able to click into a list of things and use the joysticks or D-pad to scroll down the list. I can’t. Instead, I have to use the trackpad to position the cursor over the scroll bar, then hold right trigger, then scroll with my thumb. This is extremely unergonomic.
Right, it’s really chunky - I might use it more if they had a mini version. The only use for the portability is at home (i.e. on the porch). It’s not small enough that I’d want to carry it in a bag while commuting, around town, or waiting for someone, and if I’m on vacation, the last thing I want to do is play video games instead of touching grass or spending time with people. If I really want to play a game, I’ll probably use the laptop I have (even if that restricts the choice of game - because I have a Mac…). Again, not that much of a gamer, so it’s different values I guess.
I have normal sized male hands and my girlfriend has relatively small hands and both work very well. She was actually surprised how ergonomic the Steam Deck is given the size. Other than that I only got positive reactions to the ergonomics.
Right now you can install Steam on your regular desktop Linux system, throw in Lutris to get games from other stores and you are good to go. This has been so far the best year to turn your family into Linux users yet.
It is far from ideal, but still a great improvement. And if we manage to get up to – let’s say – 10% penetration in EU, this is going to help immensely to combat mandatory remote attestation and other totalitarian crap we are going to end up with if Microsoft, Apple and Google keep their almost absolute dominance.
I appreciate that both this comment and its parent make good points that are not necessarily in conflict. I would distill this as a call for “critical support” for Valve, to borrow a term from leftist politics.
I have to say that I have had far more luck managing non-Steam game installs within Steam (you can add an entry to it, with a path, and it will manage it as a Proton install if you’d like; you basically just use Steam as a launcher and Proton prefix manager) than via Lutris.
My opinion of Lutris is that it is a janky pile of hacked-together non-determinism which was developed over a long period of time over many, many versions of Wine, and over many, many GPU architectures and standards, and long before Proton existed… which miraculously may work for you, although often will require expert hand-holding. Avoid if you are new to Linux.
There is also the Heroic Games Launcher, which I would also recommend over Lutris: https://heroicgameslauncher.com/
Noted. But I have personally had zero issues and the ability to download and install GOG games is hard to replicate in Steam.
Their improvements to proton/wine have made it so I could go from loading windows once a day to play games to loading it once a month to play specific games. Like all other for-profit companies their motives are profit driven, but so far they are contributing in ways that are beneficial and compatible with the broader Linux ecosystem. Unlike Microsoft, Oracle, Google, and Amazon they don’t have incentive to take over a FOSS project, they just don’t want to rely on Windows. But we should always keep an eye out.
Getting games to work by default on Linux also makes it much easier for people interested in Linux to try it out and people not interested to use it when convenient, which is a win in my book.
Did you look at the slides or watch the video of the talk? All their contributions are upstreamed and applicable to more than just their use case. Everything is available on Arch as well (before SteamOS was released they actually recommended Manjaro, because they are so similar). You can use Proton for the Epic Games Store or other Windows apps. Of course they are doing this out of self-interest, but according to Greg Kroah-Hartman and a lot of other kernel maintainers this isn’t a bad thing. The Steam Deck is the first “real” consumer Linux computer to have sold over a million units. I hope more Linux handhelds are released in the coming years :)
How dare they upstream improvements because of their malicious profit driven motivations!
The obvious irony here is that it is in Valve’s best interest for their stuff to be upstreamed. It’s not like they can fork KDE (for example, since it’s used by SteamOS) and maintain & support their own fork.
I don’t see why they couldn’t fork KDE? I see many reasons why they’d prefer not to.
I used to be very against stale bots until a project I started gained some traction.
At first, I was excited for every issue and PR, but as the number of users grew, so did the issues and PRs. Some of them there was just no way I could handle (for example, they only happen on macOS, and I have no way to afford a Mac). I left them open out of respect, but it definitely demoralized me to see issue numbers pile up.
After a certain amount of time, I just burnt out. I wasn’t checking issues or PRs, because I felt that if I replied to one, I should at least have the decency of looking at the others. I mean, these people took time out of their day to contribute to something I made, and I ignore their issue in favor of some other random one just because that’s the one that happened to be at the top of my GH notifications? So I just stopped looking at them, and they kept piling on and on to the point where it was completely unmanageable.
Thankfully I never had to resort to stale bots. Some dedicated users reached out and I made them maintainers, and they’re the ones taking (good!) care of the project now. I even moved it to the nix-community organization so it was clear it was no longer just “my” thing.
Still, I can definitely empathize with those who use stale bots. If I had that set up, I probably wouldn’t have burnt out so quickly. I know it might feel disrespectful to close the issue after some time, but I feel it’s even more disrespectful to just ignore everything completely due to the number of issues. (Similar thing happened recently with lodash, who closed every issue and PR by declaring issue bankruptcy)
More than anything, this highlights how incredibly underbaked GitHub Issues is as a bug tracking platform. There are any number of reasons that an issue may remain open; e.g., it has not yet been triaged, or it has been triaged and found to be low priority (e.g., if you aren’t actively supporting Mac users, perhaps any issue found on a Mac is low priority), or it’s frankly just not super important when compared to fixing critical defects and security issues. It should be possible to focus your attention on just the high priority, actionable issues, without being overwhelmed by the total open issue count.
It’s not unhealthy for a project to have a backlog of issues that may remain open indefinitely, reflecting the fact that those issues are lower in priority than the things you’re getting around to fixing. Closing issues really ought to be reserved for an active refusal to make a particular change, or if something that’s reported is not actually a bug.
I fully agree, it’s also definitely why I never closed any of those issues - I would love for them to get fixed! The first time I actually closed a contribution, it was a PR that contributed a ton of stuff, but it was just so disorganized (it really should have been split up into 15+ PRs) that it would take me weeks to get over all of it. Still, it took me over a month to own up to the fact that I was never going to be able to merge it in that state, and I felt horrible about it at the time. They took a ton of time to make a bunch of contributions to my project! And I just refused them. Still feel kind of sad I couldn’t find a better way to merge that. Long-term, though, it definitely improved my mental health (which was at an all-time-low at the time).
Thanks for sharing this, and I can understand the urge to close those issues, fix them, and keep the number down. But I also learned to just live with it. Maybe some people will have to get into the same position first to understand why a maintainer spreads their attention so selectively, but they will eventually. The other option is to just give up, and then nothing progresses. I’ve had issues where I didn’t have the hardware or the time, and eventually someone came around to work on them.
You did the right thing. The difficulty of reviewing a PR scales with something like O(lines^2) * overall complexity of the diff. And if someone is willing to put that much work into a PR, they should be willing to put in the extra time to make it possible to review. Otherwise I’d suspect they were one of several kinds of bad actor.
Why not mark the issues as “dormant”, with a short description of why it’s not feasible for the maintainer to address them (“I don’t have a Mac”)?
That still has the problem of “I need to go look through every issue” which can be challenging at times (burn-out, ADHD, simply too many issues…).
I think this might actually be one of the most reasonable uses of a stale bot - just mark it as “stale”, but don’t do anything else. That actually signals to the maintainers that these should probably get looked at!
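For the record, that label-only variant is easy to run as a scheduled job against the GitHub REST API. Here is a rough sketch; the repo name, token variable, and 180-day cutoff are all placeholders:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

REPO = "someone/some-project"        # placeholder
TOKEN = os.environ["GITHUB_TOKEN"]   # assumes a token with access to the repo
CUTOFF = datetime.now(timezone.utc) - timedelta(days=180)
HEADERS = {"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"}

# Fetch open issues, oldest activity first, and label the quiet ones "stale" - never close them.
issues = requests.get(
    f"https://api.github.com/repos/{REPO}/issues",
    headers=HEADERS,
    params={"state": "open", "sort": "updated", "direction": "asc", "per_page": 100},
).json()

for issue in issues:
    if "pull_request" in issue:      # the issues endpoint also returns PRs; skip them
        continue
    updated = datetime.fromisoformat(issue["updated_at"].replace("Z", "+00:00"))
    if updated < CUTOFF:
        requests.post(
            f"https://api.github.com/repos/{REPO}/issues/{issue['number']}/labels",
            headers=HEADERS,
            json={"labels": ["stale"]},
        )
```

The important part is that it only ever adds a label, so a human still decides what, if anything, to close.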
I mean if the stale bot is closing issues without anyone looking at them, then it could be closing issues that are both important and easy to fix. Then any sense of being on top of things that you get from seeing fewer open issues would be false, and worse you are pissing off people who open issues that are closed for no good reason.
I don’t see the point. Couldn’t maintainers just sort open issues in chronological order of the most recent activity?
For your example: wouldn’t it make sense to mark them as macOS, simply ignore them, and only review something that comes with code passing tests there?
I think it’s okay to have stuff you cannot work on and I think it’s wrong to assume that you get everything just because you write about it.
Maybe I am overly focusing on that example, but I don’t think this somehow gets solved by a stale bot and you didn’t use one for a reason.
I think that it makes sense to make clear what a user should expect. You can clarify that you won’t support a certain platform, you can clarify that you cannot support it, but accept patches. And you can certainly make clear if you don’t want to use your issue tracker to give support.
It’s fine, it’s not rude. Of course being on a non-supported platform can be a bummer and as someone who often uses more exotic things I hope that doesn’t happen and I am always happy when someone at least would welcome patches.
I think stale bots are rude for the reason you mention. If you have an open bug tracker and I contribute something sensible having a bot close it down doesn’t feel like the nicest thing. Of course there might be “I want this” and “How to do this” issues, especially on GitHub, and they tend to be closed, but that’s not what the stale bot does. It’s worse even, it kind of strengthens that attitude since people who write “I still want this”, “When is this done”, “This is important”, “Your project sucks without this”, etc. will prevent the issue from going stale.
Stale bots don’t seem to be a good solution here, since it feels like they do the opposite of what you want.
Do people feel accomplished when bots close issues for, more often than not, no reason at all?
And yes, the GitHub issue tracker is bad. People get creative though with templates. Maybe that’s a better way? Maybe another way would be using an external issue tracker.
I agree with what you’re saying, and the quality of issues/PRs improved significantly once I added some basic templates.
Still, while I don’t particularly like stale bots and wouldn’t put them on my projects, I just meant to say I get why people reach for them. Not that I particularly agree with bots who just close issues independent of triage status.
But yes, I think GitHub’s fairly simplistic issue tracker just makes the whole problem a lot worse. But I also think that a project would need to reach a certain scale before an external issue tracker is justified. I mean, I have no idea how I’d submit a bug report to Firefox. Some random library I found on GitHub though? Sure, just open an issue. There’s a lot of value in that.
Is this an argument? Mobile editing is dog shit. It’s just awful top to bottom. I can’t believe we’re 15 years into iOS and they still don’t have frigging arrow keys, let alone actually usable text editing. Almost daily, I try to edit a URL in mobile Safari and mutter that every UX engineer at Apple should be fired.
You know the UX engineers on the Safari team would just love to not have to expose the URL at all…
I don’t really know why you’re singling out Safari, when Google/Chrome have a long history of actually trying to get rid of displaying URLs. And it’s been driven not by “UX engineers”, but primarily by their security team.
For example:
https://www.wired.com/story/google-chrome-kill-url-first-steps/
(and to be perfectly honest, they’re right that URLs are an awful and confusing abstraction which cause tons of issues, including security problems, and that it would be nice to replace them… the problem is that none of the potential replacements are good enough to fill that role)
Both Apple and Google suck. What’s your point?
My point is that I’m not aware of Apple, or “UX engineers on the Safari team”, being the driving force behind trying to eliminate URLs, and that we should strive for accuracy when making claims about such things.
Do you disagree?
No one claimed that Safari is the driving force for anything. A commenter just brought it up as a source of personal annoyance for them.
Shrug! Android Play Store, the app, does this. Terrifying! It breaks the chain of trust: reputable app makers link to a URL (thankfully, it’s still a website), but you have to use the app anyway to install anything, and it has nowhere to paste the URL, let alone see it, so you can’t tell if you are installing the legit thing or not. Other than trusting their search ranking, the best you can do is compare the content by eye with the website (which doesn’t actually look the same).
I’m reluctant to install third-party apps in general, but, when I do, preserving a chain of trust seems possible for me: if I click a link to, say, https://play.google.com/store/apps/details?id=com.urbandroid.sleep on Android, it opens in the Play Store app; and, if I open such a URL in a Web browser (and I’m signed in to Google), there’s a button to have my Android device install the app. Does either of those work for you?
Wow! That did not work in Firefox just one month ago (when I had to install Ruter on my new phone). Now it does. I tried Vivaldi too, and it doesn’t even ask whether I want to open it in Google Play.
Browser devs to the rescue, I guess, but as long as the app isn’t doing their part – linking to the website – the trust only goes one way.
The upside: it reduces the amount of time you want to use your phone, which, for most people, is a good thing.
Does it though? I mean, you’ll spend much longer fiddling to get the text right!
If you think “oh this’ll just be a quick reply” and then end up actually typing more than you thought you would, it makes sense to finish the job you started on mobile, which then actually takes more time. Especially when you’re on the go and you have no laptop with you.
It really just means I use the phone for composing conceptually light things because I don’t want to mess with it any more than necessary. (This is likely an adaptation to the current state versus a defense of how it is.)
I don’t miss arrow keys with iOS Trackpad Mode[1]. The regular text selection method is crap, but it works well enough doing it via Trackpad Mode.
I think part of the problem with the iOS Safari URL bar is that Apple tries to be “smart” and modifies the autocorrect behavior while editing the URL, which in my case, ends up backfiring a whole lot. There’s no option to shut it off, though.
Wow, I had no idea this existed! Apple’s iOS discoverability is atrocious.
Agreed. Just the other day I found the on screen keyboard on my iPad was floating and I couldn’t figure out how to make it full size again without closing the app. A few days later I had the thought to try to “zoom” out on the keyboard with two fingers and it snapped back into place!
As someone more comfortable with a keyboard and mouse, I often look for a button or menu. When I step back and think about how something might be designed touch first, the iOS UX often makes sense. I just wish I had fewer “how did I not know that before!” moments.
I mean, what meaningful way is there to make it discoverable? You can’t really make a button for everything on a phone.
One other commonly unknown “trick” on iOS is that tapping the top bar often works like the HOME key on desktops, but again, I fail to see an easy way to “market” it, besides Clippy or some other annoying tutorial.
Actually, the ‘Tips’ app could have these listed instead of the regular useless content. But I do think we should make a distinction between expert usage and novices, and both should be able to use the phone.
I really don’t have an answer to that. I’ve never looked through the Tips app, nor have I been very active in reading iOS-related news[1]. Usually I just go along until I find a pain point that’s too much, and then I try to search for a solution or, more often, suffer through it.
[1] I do enjoy the ATP podcast, but the episodes around major Apple events are insufferable as each host casually drops $2,000 or more on brand new hardware, kind of belying their everyman image.
The other problem I encounter near daily is not being able to edit the title of a Lobsters post on the phone. It really sucks.
The far more frustrating thing on lobste.rs is that the Apple on-screen keyboard has no back-tick button. On a ‘pro’ device (iPad Pro), they have an emoji button but not the thing I need for editing Markdown. I end up having to copy and paste it from the ‘Markdown formatting available’ link. I wish lobste.rs would detect iOS clients and add a button to insert a backtick into the comment field next to the {post,preview,cancel} set.
Long-press on the single-quote key and you should get a popup with grave, acute etc accents. I use the grave accent (the one on the far left) for the backtick character.
Edit: testing if `this actually works`. It does!
Thank you! As someone else pointed out in this thread, iOS is not great for discovery. I tried searching the web for this and all of the advice I found involved copying and pasting.
This is a general mechanism used to (among other things) input non english letters: https://support.apple.com/guide/ipad/enter-characters-with-diacritical-marks-ipadb05adc28/ipados
Oddly enough, I knew about it for entering non-English letters and have used it to enter accents. It never occurred to me that backtick would be hidden under single quote.
You can make a backtick by holding down on single quote until backtick pops up, but it’s pretty slow going.
This seems super useful, but I’ve spent the last ten minutes trying to get it to work. It seems either that my phone’s touchscreen is old and inaccurate or I am just really dang bad at using these “newfangled” features.
I agree with your other reply - discoverability is atrocious. I learned that you can double/triple tap the back of your phone to engage an option, which blew my mind. I wonder what I’m missing out on by not ever using 3D Touch…
Samesies. The funniest bit, at least for me, is that I’m usually just trying to remove levels of the path, or just get back to the raw domain (usually because autocomplete is bizarre sometimes). This would be SUCH an easy affordance to provide since URLs already have structure built-in!
You may already know about this, but if you put the cursor in a text field, and then hold down on the space bar, after a second or two you enter a mode that lets you move the cursor around pretty quickly and accurately.
edit: I guess this is the “trackpad mode” mentioned below by /u/codejake
I find the trick of pressing down on spacebar to move the cursor works pretty well.
It’s okay but it’s still not as good as digital input for precision.
The problem is that Apple phones don’t have buttons.
No phones do anymore, it seems…
Arthur C Clarke predicted this in The City And The Stars. In its insanely-far-future society there is a dictum that “no machine shall have any moving parts.”
I wish people would be a little pickier about which predictions they implement and maybe skip the ones made in stories with a dystopian setting. Couldn’t we have stuck to nice predictions, like geostationary satellites?
It’s hidden, but… tap url bar, then hold down space and move cursor to where you want to edit. Now normal actions work ( e.g. double tap to select a word).
That said I agree with your second sentence.
The trackpad mode works very poorly on the iPhone SE because you can’t move down since there’s no buffer under the space key, unlike the newer phone types. It doesn’t work well for URLs because the text goes off screen to the right, and it moves very slowly. Ironically I’m on an iPad and I just tried to insert “well” into the last sentence and the trackpad mode put the cursor into the wrong place just as I released my tap. It just sucks. This is not a viable text editing method.
@pushcx wonder why this is allowed when product announcements are considered off-topic?
What gerikson said, and tradition.
I don’t understand the part about there being one convenient post to hide. This story was also a single post, and equally convenient to hide. Is there more to it, or should I just take the “tradition” part as the answer?
Are there any other companies which are excepted from the policy against product announcements?
Apple events tend to generate a ton of followup posts which might or might not be considered topical to this site. If an event does that, those submissions can be folded into the main event post, minimizing annoyance for those that are hiding that post.
Corpspam submissions via press release (like the Mullvad example) are more random. Individual members hiding these is not a strong signal that these are unwelcome here. Removing them via mod action is.
Mullvad announcements are more random than Apple events?
I’m gonna go out on a limb here and say “yes”. If you want to graph the two against each other along a time axis and prove me wrong, knock yourself out.
Do you have any special insight into the moderation policies of lobste.rs? I asked @pushcx because he is the one doing the moderating.
I would phrase it differently, but I agree. Allowing the Apple announcements is, yeah, sort of a “heckler’s promo” for a popular topic. They’re pretty wide-ranging and shape the direction our field develops in. The Mullvad story was a 180-word press release with negligible technical info. If they wanted to release their image or write a few thousand words about things they learned along the way in this multi-year, technically demanding project, it’d be topical and welcome, and I’d be upvoting the post.
Fundamentally, I want links that prompt informative, creative discussions in a healthy community. The Apple announcements are fine for that. Small businesses mentioning minor product enhancements almost never do.
Ironically, the Mullvad submission contained a link to an older post where they announced they were going to move to a RAM based Linux distro, and which had a bunch of links to the software they were using. Absolutely on-topic.
Had the removed submission contained that sort of information I would not have flagged it as spam in good conscience.
So there’s one convenient post to hide if you’re not interested in Apple announcements.
I will never switch to Wayland and will never port my applications to use it. I’d sooner switch to Windows and just drop Linux support entirely. (Though odds are it won’t come to that, as X actually works really very well despite, or perhaps thanks to, its relative lack of git activity.)
There’s a narrow line between pragmatism and dogmatism, this comment (especially without elaborating on why) seems to veer heavily into the latter. What reasons would you “rather switch to Windows and just drop Linux support entirely”, or do you just enjoy screwing over users who don’t use the technologies you like? Is there anything actionable devs can take from your stances, or are you just venting?
Comments like this do little but spark more vitriol in what is already a contentious debate.
I’m trying not to beat the dead horse - I’ve gone into why in many comments on other threads, and this one is a specific call to action to port things.
Porting things to Wayland is actually an enormous amount of work. Especially for me personally as a user, since it means doing something about my window manager, my beloved taskbar, every little detail of my workflow in addition to my application toolkit…. all for negative benefit; things work worse on Wayland than they already do today on X. And this situation is not likely to change for a long time; indeed, I expect X will continue to actually work adequately for ages.
Open source works best when people act out of their own rational self interest then share the result with others because it costs them near nothing to do so - I made this for me, then shared it with the hopes that it might be useful but without warranty of any kind, etc.
Switching to Wayland costs me a lot. What does it get me? And the constant streams of falsehoods out of Wayland proponents irks me to such a point where I don’t want to give them anything. Maybe if they started telling the truth instead just trying to shove this junk down my throat, I’d give them some slack. But zero respect has been earned.
Your arguments about needing to change your window manager, taskbar, etc. are reasonable (desktop environment users can mostly be transparently migrated over, those of us on tilers had to change a lot), and yes, there’s development costs. But the same can be said about keeping up with updates to, say, core libraries (GTK updates, OpenSSL updates, whatever). Keeping software running as times and tools evolve is always going to be work.
(tangent below)
I will say, “things work worse on Wayland than they already do today on X” is flat-out false for my usecases and something I hear parroted constantly; it hasn’t been true for my usecases at all since 2018, when I switched to Wayland full-time (back when it was beta-ware at best). I run (sometimes, at least) mixed-DPI monitors (i.e. a normal-DPI external monitor at 100% scale, but a laptop panel at 125-200% scale), which is something Xorg notoriously can’t handle in a reasonable way (there are toolkit-level hacks, if I’m willing to only use Qt apps, which, much as I wish I could, I can’t, since Firefox and all Electron apps are GTK). Every time I play with Xorg again to see what folks are on about, I have so much screen tearing and flickering and general glitching that it distracts me from watching videos or playing games. Last I checked (this may have changed), Firefox didn’t support VA-API hardware acceleration for YouTube videos on Xorg, only on Wayland, which directly costs me CPU cycles and thus battery life on portable devices. Xorg (and all window managers I’ve ever used on it) allows a window to fully claim control of the screen in a way the window manager can’t override, so if a game selects the wrong fullscreen resolution, I’m stuck - potentially so stuck that I need to run `pkill` on the process from a VT after cracking out Ctrl-Alt-F2.
So… I mean, look, I’m glad Xorg works for you, and works for enough other folks that some projects like OpenBSD are devoutly sticking to it. That’s the beauty of open-source, we can use what works for us. But if we’re just sharing anecdotes and throwing words around, there’s my set: Xorg is unbelievably broken to me as an end-user (never mind as a developer, I’m looking at this solely from the lens of my “normie” usecases). I will accept Wayland’s lack of global key shortcuts and patchy-at-best screensharing abilities (which are slowly improving) over literally any experience Xorg offers these days.
And so your last paragraph about “constant streams of falsehoods” and “shove this junk down my throat” confuses me. What falsehoods? The Wayland folks say it works, and wow, does it ever, as long as I don’t need a few specific workflows. And if I need those, then Xorg is right there, at least until it runs out of maintainers. Why the vitriol and hate? I see tons of truth in the space; maybe read better/less inflammatory articles? Check out the work emersion and a few others are doing while maintaining wlroots. They seem like fairly straight shooters to me from anything I’ve read.
When somebody tells you something about their own experience, I don’t think that counts as “parroting”.
Why indeed. I think you’ll miss a trick if you dismiss it. Wayland feels like systemd in this respect.
They’re also similar in that they both seem to be instantiations of the old “something must be done; this is something; therefore this must be done.”
For what it’s worth, I pick and choose the technologies that work for me. Wayland yes, systemd no, Pipewire yes, Flatpak absolutely the hell not. I understand where your sentiment in that last sentence comes from, but I’m not sure I ascribe it to all of the modern Linux renovation projects. Some (e.g. Pipewire) seem quite well thought-out, some (Wayland) seem… uh, maybe under-specified to start and rushed out, but mostly recoverable, some (systemd) I think had far, far too much scope in a single project, and some I just outright disagree with the UX considerations of (Flatpak). Again: I’m glad we get to pick and choose in this niche, to some degree (you can argue this only really exists on certain distributions like Gentoo and Void and not be horribly incorrect…)
I don’t believe that’s true. For example twm keeps full screen clients in their own window which you can resize and move around.
This would be a pleasant surprise of a thing to learn that I’m wrong about. It might also vary game to game? I seem to recall some games taking exclusive control of the rendering plane, but if a WM is capable of fixing that, then that solves that point to be equal with my Wayland experience (where “fullscreen” is a lie the compositor makes to the client, and I can always Super-F my way back down to a tiled window)
I believe WMs which allow this functionality are not EWMH compliant, but there is nothing forcing a WM to do it one way or another. Try it with twm and see if you experience the pleasant surprise that you hypothesize.
I’m afraid your interlocutor is correct on this point. Under X11, screen locking apps have no special permissions, so it follows that if your screen saver can take over your screen, so can any other app. Compliance with EWMH is completely optional for your window manager or any other X11 client.
That being said I’ll still be using X11 until I can’t run a modern web browser or Steam with it and or Debian drops it from stable. X11 is a crazy mess, but it’s the crazy mess I know and love.
Or maybe I’ll switch to Arcan.
You’re right, clients can bypass the window manager using the override-redirect flag. I wonder if games use that flag in practice – full screen programs generally don’t, but games might use it for better performance.
I’ve interacted with the LLVM project only once (an attempt to add a new clang diagnostic), and my experience with Phabricator was a bit painful (in particular, the arcanist tool). Switching to GitHub will certainly reduce friction for (new) contributors.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub now holds an awful lot of our eggs in its basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
There are so many alternatives they could have chosen if they wanted the pull/merge request model. It really is a shame they ended up where they did. I’d love to delete my Microsoft GitHub account just like I deleted my Microsoft LinkedIn account, but the lock-in all of these projects take on means that to participate in open source, I need to keep a proprietary account with a company training on all of our data, upselling things we don’t need, & turning a code forge into a social media platform with reactions + green graphs to induce anxiety + READMEs you can’t read anymore since it’s all about marketing (inside their GUI) + Sponsors which should be good but they’re skimming their cut of course + etc.
If even 1% of the energy that’s spent on shaming and scolding open-source maintainers for picking the “wrong” infrastructure was instead diverted into making the “right” infrastructure better, this would not be a problem.
Have you used them? They’re all pretty feature complete. The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
It’s the same with chat apps—they can all send messages, voice/video/images, replies/threads. There’s no reason to be stuck with WhatsApp, Messenger, Telegram, but people do since their network is there. So you need to get the network to move.
And open-source collaboration is, in fact, a social activity. This suggests an area where alternatives need to be focusing some time and effort, rather than (again) scolding and shaming already-overworked maintainers who are simply going where the collaborators are.
Breaking out the word “social” from “social media” isn’t even talking about the same thing. It’s social network ala Facebook/Twitter with folks focusing on how many stars, how green their activity bars are, how flashy their RENDERME.md file is, scrolling feeds, avatars, Explore—all to keep you on the platform. And as a result you can hear anxiety in many developers on how their Microsoft GitHub profile looks—as much as you hear folks obsessing about their TikTok or Instagram comments. That social anxiety should have little place in software.
Microsoft GitHub’s collaboration system isn’t special & doesn’t even offer a basic feature like threading; replying to an inline-code comment via email puts a new reply on the whole merge request, and there are other bugs. For collaboration, almost all of the alternatives have a ticketing system, with some having Kanban & additional features—but even then, a dedicated (hopefully integrated) ticketing system, forum, mailing list, or libre chat option can offer a better, tailored experience.
Suggesting open source dogfood its own infrastructure leads to better open source & more contributions, rather than allowing profit-driven entities to try to gobble up the space. In the case of these closed platforms, you as a maintainer are blocking off an entire part of your community that values privacy/freedom, or those blocked by sanctions, while helping centralization. The alternatives are in the good-to-good-enough category, so there’s nothing to lose, and it opens up collaboration to a larger audience.
But I’ll leave you with a quote
— Matt Lee, https://www.linuxjournal.com/content/opinion-github-vs-gitlab
The population of potential collaborators who self-select out of GitHub for “privacy/freedom”, or “those blocked by sanctions”, is far smaller than the population who actually are on GitHub. So if your goal is to make an appeal based on size of community, be aware that GitHub wins in approximately the same way that the sun outshines a candle.
And even in decentralized protocols, centralization onto one, or at most a few, hosts is a pretty much inevitable result of social forces. We see the same thing right now with federated/decentralized social media – a few big instances are picking up basically all the users.
There is no number of quotes that will change the status quo. You could supply one hundred million billion trillion quadrillion octillion duodecillion vigintillion Stallman-esque lectures per femtosecond about the obvious moral superiority of your preference, and win over zero users in doing so. In fact, the more you moralize and scold the less likely you are to win over anyone.
If you genuinely want your preferred type of code host to win, you will have to, sooner or later, grapple with the fact that your strategy is not just wrong, but fundamentally does not grasp why your preferences lost.
Some folks do have a sense of morality to the decisions they make. There are always trade offs, but I fundamentally do not agree that the tradeoffs for Microsoft GitHub outweigh the issue of using it. Following the crowd is less something I’m interested in than being the change I & others would like to see. Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them & are then reassured that the project & collaboration will continue. I see a lot of positive collaboration on SourceHut ‘despite’ not having the social features and doing collaboration via email + IRC & it’s really cool. It’s possible to overthrow the status quo—and if the status quo is controlled by a US megacorp, yeah, let’s see that change.
But this is a misleading statement at best. Suppose that on Platform A there are one million active collaborators, and on Platform B there are ten. Sure, technically “collaboration will continue” if a project moves to Platform B, but it will be massively reduced by doing so.
And many projects simply cannot afford that. So, again, your approach is going to fail to convert people to your preferred platforms.
I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference like choosing a flavor of jam at the market. And if folks aren’t voicing an opinion, then the status quo would remain.
You seem to see it as a stark binary where you either have it or you don’t. Most people view it as a spectrum on which they make tradeoffs.
Already mentioned it. This case is a clear ‘not worth it’ because the alternatives are sufficient & the social network part is more harmful than good.
I think you underestimate the extent to which social features get and keep people engaged, and that the general refusal of alternatives to embrace the social nature of software development is a major reason why they fail to “convert” people from existing popular options like GitHub.
To clarify, are you saying that social gamification features like stars and colored activity bars are part of the “social nature of software development” which must be embraced?
Would you clarify?
And yet here you are, shaming and scolding.
What alternatives do you have in mind?
Assuming they wanted to move specifically to Git & not a different DVCS, LLVM probably would have the resources to run a self-hosted Forgejo instance (what ‘powers’ Codeberg). Forgejo supports that pull/merge request model—and they are working on the ForgeFed protocol, which as a bonus would allow federation support, meaning folks wouldn’t even have to create an account to open issues & participate in merge requests, which is a common criticism of these platforms (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at present, even if self-hosted, since an account is still required). If pull/merge request + Git isn’t a requirement, there are more options.
How do they manage to require you to make an account for self-hosted GitLab? Is there a fork that removes that requirement?
Self-hosting GitLab does not require any connection to GitLab computers. There is no need to create an account at GitLab to use a self-hosted GitLab instance. I’ve no idea where this assertion comes from.
One does need an account to contribute on a GitLab instance. There is integration with authentication services.
Alternatively, one could wait for the federated protocol.
In my personal, GitHub-avoiding, experience, I’ve found that using mail to contribute usually works.
That’s what I meant… an account is required for the instance. With ForgeFed & mailing lists, no account on the instance is required. But there was news 1–2 weeks ago about trying to get some form of federation into GitLab. It was likely a complaint about needing to create accounts on all of the self-hosted options.
I think the core thing is that projects aren’t in the “maintain a forge” business, but the “develop a software project” business. Self-hosting is not something they want to be doing, as you can see from the maintenance tasks mentioned in the article.
Of course, then the question is, why GitHub instead of some other managed service? It might be network effect, but honestly, it’s probably because it actually works mostly pretty well - that’s how it grew without a network effect in the first place. (Especially on a UX level. I did not like having to deal with Phabricator and Gerrit last time I worked with a project using those.)
I would not be surprised if GitHub actively courted them as hostees. It’s a big feather in GH’s cap and reinforces the idea that GH == open source development.
I think the move started on our side, but GitHub was incredibly supportive. They added a couple of new features that were deal breakers and they waived the repo size limits.
There is Codeberg & others running Forgejo/Gitea, as well as SourceHut & GitLab, which are all Git options without needing Microsoft GitHub or self-hosting. There are others for non-Git DVCSs. The Microsoft GitHub UI is slow, breaks all my browser shortcuts, and has upsell ads all throughout. We aren’t limited to `if not MicrosoftGitHub then SelfHost`.
This is literally what I addressed in the second paragraph of my comment.
Not arguing against you, but with you, showing examples.
I’m astonished by how often this mistake is repeated. I’ve been yelling into the void about it for what feels like an eternity, but I’ll yell once more, here and now: JSON doesn’t define, specify, guarantee, or even in practice reliably offer any kind of stable, deterministic, or (ha!) bijective encoding. Which means any signature you make on a JSON payload is never gonna be sound. You can’t sign JSON.
If you want to enforce some kind of canonicalization of JSON bytes, that’s fine!! and you can (maybe) sign those bytes. But that means that those bytes are no longer JSON! They’re a separate protocol, or type, or whatever, which is subject to the rules of your canonical spec. You can’t send them over HTTP with Content-Type: application/json, you can’t parse them with a JSON parser, etc. etc. with the assumption that the payload will be stable over time and space.
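To make that concrete, a minimal sketch in Python (nothing here is specific to any particular library or protocol):

```python
import hashlib
import json

# Two spec-equivalent JSON texts: the same object, different bytes.
wire_a = b'{"a":1,"b":2}'
wire_b = b'{ "b": 2, "a": 1 }'

# Every conforming parser must treat them as the same value...
assert json.loads(wire_a) == json.loads(wire_b)

# ...but hashes and signatures are computed over bytes, and the bytes differ,
# so a signature made over wire_a will never verify over wire_b.
print(hashlib.sha256(wire_a).hexdigest())
print(hashlib.sha256(wire_b).hexdigest())
```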
Oh god, I thought we had learned our lesson from Secure Scuttlebutt. Come on people.
For anyone else who didn’t know what this referred to, a bit of searching led me to this post, which I did a Find for “JSON” in.
Edit: adding quotes around “JSON”.
canonical json is actually pretty well defined in some matrix spec appendix if i recall?
Matrix Specification - Appendices § 3.1. Canonical JSON. I haven’t reviewed to see just how “canonical” it is/whether it truly excludes all but one interpretation/production of a given object etc., but that’s been part of the spec since no later than v1.1 (November 2021), maybe earlier.
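For a rough idea of what such canonicalization rules amount to, here’s a sketch in Python; note this only approximates schemes like Matrix canonical JSON or RFC 8785, which also pin down number formatting, string escaping, and Unicode details:

```python
import json

def roughly_canonical(value) -> bytes:
    # Sort object keys and drop optional whitespace. Real canonical-JSON
    # schemes additionally constrain number and string representation.
    return json.dumps(
        value, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")

# Different wire forms of the same object collapse to the same bytes.
assert (
    roughly_canonical(json.loads('{ "b": 2, "a": 1 }'))
    == roughly_canonical(json.loads('{"a":1,"b":2}'))
    == b'{"a":1,"b":2}'
)
```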
Doesn’t https://www.rfc-editor.org/rfc/rfc8785 specify a good enough canonical form?
It’s a perfectly lovely canonical form, but it’s not mandatory. JSON parsers will still happily accept any other non-canonical form, as long as it remains spec-compliant. Which means the JSON payloads `{"a":1}` and `{ "a": 1 }` represent exactly the same value, and parsers must treat them as equivalent.
If you want a well-defined and deterministic encoding, which produces payloads that can be e.g. signed, then you need guarantees at the spec level, like what e.g. CBOR provides. There are others. (Protobuf is explicitly not one!!)
Of course other forms are equivalent, but only one is canonical. That’s what the word means.
Sure, if you want to parse received JSON payloads and then re-encode them in your canonical form, you can trust that output to be stable. Just as long as you don’t sign the payload you received directly!
That reminds me, I wonder what the status is on low bandwidth Matrix, which uses CBOR.
When I read about it, I was wondering why a high bandwidth Matrix would be the default, if they do the same thing. Now I wonder for more reasons.
Can you say more about Protobuf not guaranteeing a deterministic encoding at the spec level? Is it that the encoding is deterministic for a given library, but this is left to the implementation rather than the spec? Does the spec say something about purposely leaving this open?
Protobuf encoding is explicitly defined to be nondeterministic and unstable.
https://protobuf.dev/programming-guides/encoding/#order
https://protobuf.dev/programming-guides/dos-donts/#serialization-stability
Needless to say, you should never sign a protobuf payload :)
Thanks I think I’m gonna use Rivest’s S-expressions.
The implementation is explicitly allowed to have a deterministic order.
At the spec level it’s undefined.
At the implementation level, it may be defined. That’s typical for all such technologies across the industry.
An important detail was omitted.
Yes, which is my point — unless you’re operating in a hermetically sealed environment, senders can’t assume anything about the implementation of receivers, and vice versa. You can maybe rely on the same implementation in an e.g. unit test, but not in a running process. The only guarantees that can be assumed in general are those established by the spec.
Exact same thing here — modulo hermetically sealed environments, senders can’t assume anything about the build used by receivers, and vice versa.
Tangentially, reading the spec: JSON is UTF-8, not UTF-16.
That spec should specify Unicode code point order (Unicode, ASCII, UTF-8, and UTF-32 all share the same sort order), not UTF-16, as UTF-16 puts some code points out of order. That was one of the reasons why UTF-8 was created.
Also, we don’t sort JSON object keys for cryptography. Order is inherited from the UTF-8 serialization for verification. Afterward, the object may be unmarshalled however seen fit. This allows arbitrary order.
One does not sign JSON, one signs a bytearray. That multiple JSON serializations can have the same content does not matter. One could even argue that it’s a feature: the hash of the bytearray is less predictable which makes it more secure.
I do not get the hangup on canonicalization. Just keep the original bytearray with the signature: done.
Lower in this thread a base64 encoding is proposed. Nonsense, just use the bytearray of the message. What the internal format is, is irrelevant. It might be JSON-LD, RDF/XML, Turtle, it does not matter for the validity of the signature. The signature applies to the bytearray: this specific serialization.
Trying to deal with canonicalization is a non-productive intellectual hobby that makes specifications far too long, complex and error prone. It hinders adoption of digital signatures.
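A sketch of the keep-the-bytes approach being described: sign the exact bytes you sent, ship the signature alongside them, and verify before parsing. HMAC with a shared key stands in for a real signature scheme here, and the field names are just illustrative:

```python
import hashlib
import hmac
import json

KEY = b"shared-secret"  # stand-in; a real system would use an asymmetric signature

def sign(raw: bytes) -> bytes:
    return hmac.new(KEY, raw, hashlib.sha256).digest()

# Sender: serialize once, sign those exact bytes, transmit both.
raw = json.dumps({"amount": 10, "to": "alice"}).encode("utf-8")
sig = sign(raw)

# Receiver: verify against the received bytes *before* interpreting them.
assert hmac.compare_digest(sign(raw), sig)
doc = json.loads(raw)  # only now treat the bytes as JSON
```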
A JSON payload (byte array) is explicitly not guaranteed to be consistent between sender and receiver.
This is very difficult to enforce in practice, for JSON payloads particularly.
Of course a bytearray is consistent. There’s a bytearray. It has a hash. The bytearray can be digitally signed. Perhaps the bytearray can be parsed as a JSON document. That makes it a digitally signed JSON document. It’s very simple.
Data sent from sender to receiver is sent as a bytearray. The signature will remain valid for the bytearray. Just don’t try to parse and serialize it and hope to get back the same bytearray. That’s a pointless exercise. Why would you do that? If you know it will not work, don’t do it. Keep the bytearray.
What is hard to enforce? When I send someone a bytearray with a digital signature, they can check the signature. If they want to play some convoluted exercise of parsing, normalizing, serializing and hoping for the same bytearray, you can do so, but don’t write such silliness in specifications. It just makes them fragile.
Sending bytearrays is not hard to do, it’s all that computers do. Even in browsers, there is access to the bytearray.
Canonicalization is premature optimization.
If you send that byte array in an HTTP body with e.g. Content-Type: application/octet-stream, yes — that marks the bytes as opaque, and prevents middleboxes from parsing and manipulating them. But with Content-Type: application/json, it’s a different story — that marks the bytes as representing a JSON object, which means they’re free to be parsed and re-encoded by any middlebox that satisfies the rules laid out by JSON. This is not uncommon; CDNs will sometimes compact JSON as an optimization. And it’s this case I’m mostly speaking about.
I’m not trying to be difficult, or speculating about theoreticals, or looking for any kind of argument. I’m speaking from experience, this is real stuff that actually happens and breaks critical assumptions made by a lot of software.
If you sign a JSON encoding of something, and include the bytes you signed directly alongside the signature as opaque bytes — i.e. explicitly not as a sibling or child object in the JSON message that includes the signature — then no problem at all.
tl;dr: sending signatures with JSON gotta be like `{"sig":"XXX", "msg":"XXX"}`
Such CDNs would break Subresource Integrity and etag caching. Compression is a much more powerful optimization than removing a bit of whitespace, so it’s broken and inefficient. Changing the content in any way based on a mimetype is dangerous. If a publisher uses a CDN with such features, they should know to disable them when the integrity of the content matters.
I’m sending all my mails with a digital signature (RFC 4880 and 3156). That signature is not applied to a canonicalized form of the mail apart from having standardized line endings. It’s applied to the bytes. Mail servers should not touch the content other than adding headers.
Dangerous or not, if something says it’s JSON, it’s subject to the rules defined by JSON. A proxy that transforms the payload according to those rules might have to intermediate on lower-level concerns, like Etag (as you mention). But doing so would be perfectly valid.
And it’s not limited to CDNs. If I write a program that sends or receives JSON over HTTP, any third-party middleware I wire into my stack can do the same kind of thing, often without my knowledge.
Yes, sure. But AFAIK there is no concept of a “mail object” that’s analogous to a JSON object, is there?
A digital signature does not apply to JSON. It applies to a bytearray. If an intermediary is in a position to modify the data it transmits and does not pass along a bytearray unchanged, it’s broken for the purpose of passing on data reliably and should not be used.
Canonicalization cannot work sustainably because as soon as it does some new ruleset is thought up by people that enjoy designing puzzles more than creating useful software. Canonicalization has a use when you want to compare documents, but is a liability in the context of digital signatures.
A digital signature is meant to prove that a bytearray was endorsed by an entity with a private key.
If any intermediary mangles the bytearray, the signature becomes useless and the intermediary should be avoided. An algorithm that tries to undo the damage done by broken intermediaries is not the solution. Either the signature matches the bytearray or it does not.
100% agreement.
Again 100% agreement, which supports my point that you can’t sign JSON payloads, because JSON explicitly does not guarantee that any encoded form will be preserved reliably over any transport!
Citation needed. I can read nothing about this in RFC 8259. Perhaps your observation is a fatalist attitude that springs from working with broken software. Once you allow this for JSON, what’s next? Re-encoding JPEGs, adding tracking watermarks to documents? No transport should modify the payload that it is transporting. If it does, it’s broken.
There is no guarantee about the behavior of transports in the JSON RFC 8259. There is also no text that allows serialization to change for certain transports.
Yes, sure. If the payloads are tagged as specific things with defined specs, intermediaries are free to modify them in any way that doesn’t violate the spec. This isn’t my speculation, or fatalism, it’s direct real-world experience.
If you want to ensure that your payload bytes aren’t modified, then you need to make sure they’re opaque. If you want to send such bytes in a JSON payload, you need to mark the payload as something other than JSON, or encode those bytes in a JSON string.
You might be missing the core info about why many signed JSON APIs are trash: they include the signature in the same JSON document as the thing they sign:
The signature is calculated for a JSON serialization of a dict with, in this example, the keys username and message, then the signature key is added to the dict. This modified dict is serialised again and sent over the network.
This means that the client doesn’t have the original byte array. It needs to parse the JSON it was given, remove the signature key, and then serialize again in some way that generates exactly the same bytes, and only then can it verify the signature over those bytes and validate the message.
This is clearly completely bonkers, but several protocols do variations on this, including Matrix, Secure Scuttlebutt, and whatever this is https://cyberphone.github.io/doc/security/jsf.html#Sample_Object
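A toy version of that flow, to make the shape concrete (HMAC stands in for a real signature; the `signature` key and sort-keys rule are illustrative, not any particular protocol’s):

```python
import hashlib
import hmac
import json

KEY = b"shared-secret"  # stand-in for a real signing key

# Sender: sign a serialization of the dict, then embed the signature in it.
event = {"username": "alice", "message": "hi"}
signed_bytes = json.dumps(event, sort_keys=True).encode("utf-8")
event["signature"] = hmac.new(KEY, signed_bytes, hashlib.sha256).hexdigest()
wire = json.dumps(event)  # the receiver never sees signed_bytes itself

# Receiver: strip the signature and re-serialize, hoping to reproduce the
# sender's bytes exactly (same key order, whitespace, number formatting...).
received = json.loads(wire)
sig = received.pop("signature")
reconstructed = json.dumps(received, sort_keys=True).encode("utf-8")
assert hmac.compare_digest(
    hmac.new(KEY, reconstructed, hashlib.sha256).hexdigest(), sig
)
# This only works because both ends happen to use the same library with the
# same settings; across languages and libraries it falls apart.
```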
The PayPal APIs do the thing you’re thinking of: they generate some bytes (which you can parse to JSON) and provide the signature as a separate value (as an HTTP header, I think).
@peterbourgon’s suggestion also avoids the core issue and additionally protects against middle boxes messing with the bytes (which I agree they shouldn’t do, but they do so 🤷) and makes the easiest way of validating the signature also the correct way.
(If the application developer’s web framework automatically parses JSON then you just know that some of them are going to remove the signature key, reserialise and hash that (I’ve seen several people on GitHub try to do this with the JSON PayPal produces))
The PayPal way is fine, but you then get into the question of how to transmit two values instead of one. You can use HTTP headers or multipart encoding, but now your protocol is tied to HTTP and users need to understand those things as well as JSON. Peter’s suggestion requires users only to understand JSON and some encoding like base64.
A final practical point: webservers sometimes want to consume the request body and throw it away if they can parse it into another format (elixir phoenix does this, for efficiency, they say), so your users may need to provide a custom middleware for your protocol and get it to run before the default JSON middleware, which is likely to be more difficult for them than turning a base64 string back into JSON.
likewise, it really frustrates me. I’m not surprised, just annoyed, because it’s an aspect of things that always gets fixed as an afterthought in cryptography-related standards…
nobody likes ASN.1, especially not the experts in it, but it exists for a reason. text-based serialization formats don’t canonicalize easily and specifying a canonicalization is extra work. even some binary formats, such as protocol buffers, don’t necessarily use a canonical form (varints are the culprit there).
ASN.1 does not help with canonicalization either. It has loads of different wire encodings, e.g. BER, PER. For cryptographic purposes you must use DER, which is BER with extra rules to say which of the many alternative forms in BER must be used, e.g. forbidding encodings of integers with leading zeroes.
yes, that’s fair.
Huge Cosmos SDK vibes.
Signing messages was an entire procedure involving ordering JSON fields alphanumerically, minifying and then signing the hash.
So many hours have been spent because a client, typically not written in Go, would order a field differently, yielding a different hash.
Good times.
Brother, I’ve got some stories. I’ve actually filed a CVE to the Cosmos SDK for a signing-related issue. (Spoiler: closed without action.)
Yup, sounds like an SDK episode to me.
I think I remember seeing your name on a GitHub issue conversation, with the same couple of “adversaries” justifying their actions lol.
I distanced myself from that ecosystem both professionally and hobby-wise because I did not like how the tech stack was implemented, and how the governance behaved.
Although most of the bad decisions have been inherited from a rather… peculiar previous leadership.
A solution that I like for this is base64 encoding the json, and signing the base64 blob.
Which is a roundabout way to agree: don’t sign json.
…but this has the same problem? If you reorder the keys in an object in the JSON, you’re going to get a different base64 string.
No. The point is that you get a different base64 string. It makes it obvious that the message was tampered with.
The problem is that when canonicalizing json, there are multiple json byte sequences that can be validated with a given signature.
A bug in canonicalizing may lead to accepting a message that should not have been accepted. For example, you may have duplicate fields. One json parser may take the first duplicate, one may take the last, and if you canonicalized after parsing and passed the message along, now you can inject malicious values:
You may say “but if you follow the RFC, don’t use the stock json libraries that try to make things convenient, and are really careful, you’re protected”. You’d be right, but it’s a tall order.
With base64, there’s only one message that will validate with a given signature (birthday attacks aside). It’s much harder to get wrong.
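The duplicate-field case mentioned above is easy to demonstrate; here’s a sketch in Python, whose stock parser keeps the last occurrence of a repeated key:

```python
import json

# RFC 8259 leaves duplicate keys undefined: some parsers keep the first
# value, others keep the last. Python's json module keeps the last one.
wire = b'{"to": "alice", "to": "mallory", "amount": 10}'

doc = json.loads(wire)
print(doc["to"])  # -> "mallory"

# If a signature was checked against a canonicalization produced by a
# component that kept "alice", this verifier still accepts the message,
# but acts on "mallory" -- the injected value.
```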
Well, not exactly. `{"a":1}` and `{ "a": 1 }` are different byte sequences, and equivalent JSON payloads. But the base64 encodings of those payloads are different byte sequences, and different base64 payloads – base64 is bijective. (Or, at least, some versions of base64.)
Another way to phrase this is that it makes it hard to shoot yourself in the foot. If you get straight JSON over the wire, what do you do? You need to parse it in order to canonicalize it, but your JSON parser probably doesn’t parse it the way you need it to in order to canonicalize it for verification, so now you have to do a bunch of weird stuff to try and parse it yourself, and maybe serialize a canonicalized version again just for verification, etc.
The advantage of using base64 or something like it (e.g. straight hex encoding as mentioned in your sibling comment) is that it makes it obvious that you should stop pretending that you can reasonably sign a format that can’t be treated as “just a stream of bytes” (because you can’t - a signature over a stream of bytes is the only cryptographic primitive we have, so what you’re actually doing by “canonicalizing JSON” is turning the JSON into a stream of bytes, poorly) and just sign something that is directly and solely a stream of bytes.
Edit: the problem with this is that you’ve now doubled your storage cost. The advantage of signing JSON is that you can deserialize, store that in a database alongside the signature, and reconstruct functionally the same thing if you need to retransmit the original message (for example to sync a room up to a newly-joined Matrix server). If you’re signing base64/hex-encoded blobs, you now need to store the original message that was signed, rather than being able to reconstruct it on-the-fly. But a stream of bits isn’t conducive to e.g. database searches, so you still have to store the deserialized version too. Hence: 2x storage.
Even doing that much I would consider to be a success!
One, it’s rare that a canonical form is even defined, and rarer still that it’s defined in a way that’s actually unambiguous. I’m dubious that Matrix’s canonical JSON spec (linked elsewhere) qualifies.
Two, even if you have those rules, it’s rare that I’ve ever seen code that follows them. Usually a project will assume the straight JSON from the wire is canonical, and sign/verify those wire bytes directly. Or, it might parse the wire bytes into a value, but then it will sign/verify the bytes produced by the language default JSON encoder, assuming those bytes will be canonical.
I don’t understand why a distinction between reordering keys and changing whitespace needs to be made. Are they treated differently in the JSON RFC?
Equivalent according to whom? The JSON RFC doesn’t define equality.
Are you simply saying that defining a canonical key ordering wouldn’t be sufficient since you’d need to define canonical whitespace too? If so, I don’t understand why it contradicts bdesham’s comment, since they just gave a single example of what base64 doesn’t canonicalize.
I didn’t mean to distinguish key order and whitespace. Both are equally and explicitly defined to be arbitrary by the JSON spec.
Let me rephrase: `{"a":1,"b":2}` and `{"b":2,"a":1}` and `{ "a": 1, "b": 2 }` are all different byte sequences, but represent exactly the same JSON object. The RFC specifies JSON object equality to at least this degree — we’ll ignore stuff like IEEE float precision 😉 If you defined a canonical encoding, your parser would reject non-canonical input, which isn’t permitted by the JSON spec, and means you’re no longer speaking JSON.
I don’t think so. At least RFC 8259 doesn’t identify any (!) of those terms. (It can’t for at least two reasons: it doesn’t know how to compare strings, and it explicitly says ordering of kv pairs may be exposed as semantically meaningful to consumers.)
JSON is semantically hopeless.
Where? I searched for “order” and didn’t find anything that would imply this conclusion, AFAICT.
Here’s what I did find:
and
which to me seems to pretty clearly say that order can’t matter to implementations. Maybe I’m misreading.
JSON is an encoding format that’s human-readable, basically ubiquitous, and more or less able to express what most people need to express. These benefits hugely outweigh the semantic hopelessness you point out, I think.
I think you did misread it, I’m afraid.
Those are the quotes I mean, particularly the latter one:
Left unsaid is that implementations that do depend on or expose member ordering may not be interoperable in that sense. And we know they are still implementations of JSON because of the first sentence there. (“Left unsaid” in that one can infer that anything goes from the first sentence taken with the contrapositive of the second.) Slightly weaselly language like this exists throughout the RFC, including in areas related to string and number comparison. If I understand correctly, while many of those involved wanted to pin down JSON’s semantics somewhat, they could not reach agreement.
You might be right. That “more or less” gives me the heebie-jeebies though, because without semantics, the well-known security and interoperability problems will just keep happening. People never really just use JSON, there’s always some often-unspoken understanding about a semantics for JSON involved. Otherwise they couldn’t communicate at all. (The JSON texts would have to remain uninterpreted blobs.) And where parties differ in the fine detail of that understanding, they will reliably miscommunicate.
I read this as supporting my interpretation, rather than refuting it. I read it as saying that implementations must be interoperable (i.e. produce equivalent outcomes) regardless of ordering.
Totally agreed! And in these cases, implementations have no choice but to treat the full range of possibilities as possibilities, they can’t make narrower assumptions while still remaining compliant with the spec as written.
It’s a tautology. If you don’t depend on the ordering, you won’t be affected by the ordering. It doesn’t anywhere say that an implementation must not depend on the ordering.
The wording is very similar to the wording in sections regarding string comparison, which if I understand you correctly, you believe is an underdefined area. From section 8.3:
Again unsaid: those that don’t may not so agree.
It says that
Meaning, as long as object keys are unique, two JSON payloads with the same set of name-value mappings must be “interoperable” (i.e. semantically equivalent JSON objects) regardless of key order or whitespace or etc.
No, it says they’ll agree on the name-value mappings. It doesn’t say anything there about whether they can observe or will agree on the ordering - that’s the purpose of the following paragraph, talking about ordering.
Agreeing on name-value mappings is necessarily order-invariant. If this weren’t the case, then the object represented by `{"a":1,"b":2}` wouldn’t be interoperable with (i.e. equivalent to) the object represented by `{"b":2,"a":1}` — which is explicitly not the case.
Where does it say those objects are equivalent?
I put it to you that the RFC does not equate those objects, but says that JSON implementations that choose certain additional constraints - order-independence, a method of comparing strings, a method of comparing numbers - not required by the specification will equate those objects.
The RFC is very carefully written to avoid giving an equivalence relation over objects.
I understand “interoperable” to mean “[semantically] equivalent”.
If this weren’t the case, then JSON would be practically useless, AFAICT.
It’s not so complicated. The JSON payloads `{"a":1,"b":2}` and `{"b":2,"a":1}` must be parsed by every valid implementation into JSON objects which are equivalent. I hope (!) this isn’t controversial.
Does JavaScript include a valid implementation of JSON? How would we test your assertion above in JavaScript?
My proposal for testing this assertion would be this:
Would you agree that this constitutes a valid test of the assertion?
I’m no Javascript expert, so there may be details or corner cases at play in this specific bit of code. But, to generalize to pseudocode
then yes I’d say this is exactly what I mean.
edit: Yeah, of course JS defines == and === and etc. equality in very narrow terms, so those specific operators would say “false” and therefore wouldn’t apply. I’m referring to semantic equality, which I guess is particularly tricky in JS.
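A rough sketch of what I mean by semantic equality, in Python for brevity (not the JavaScript test proposed above): parse both texts and deep-compare the resulting values, rather than comparing the texts or object identities.

```python
import json

def semantically_equal(text_a: str, text_b: str) -> bool:
    # Dicts compare without regard to key order; lists compare
    # element-by-element, in order.
    return json.loads(text_a) == json.loads(text_b)

assert semantically_equal('{"a":1,"b":2}', '{ "b": 2, "a": 1 }')
assert not semantically_equal('[1, 2]', '[2, 1]')
```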
Exactly! Me too. I’m saying that every example of interoperability the spec talks about is couched in terms of “if your implementation chooses to do this, …”, i.e. adherence to the letter of the spec alone isn’t enough to get that interoperability. And the practical uselessness - yes, that’s what I believe. It’s fine when parties explicitly contract into a semantics overlaying the syntax of the RFC but all bets are off in cases of middleboxes, databases, query languages etc as far as the standard is concerned.
This is of course a very sensible position, but it goes beyond the requirements of the RFC.
I read the RFC as very unambiguously requiring the thing that I said, so if we don’t agree on that point, I guess we’ll agree to disagree.
A nitpick - if we wrote an encoding of a map as `[["a",1],["b",2]]` and another with the elements swapped, I hope we should agree that the two lists contain the same set of name-value mappings. Agreeing on the mappings when keys are disjoint (as required by the spec) is a different relation than equivalence of terms (carefully not defined by the spec), is what I’m trying to say.
No, why would they? A name/value mapping clearly describes key: value pairs in an object, e.g. `{"name":"value"}`, nothing else.
Maps (objects) are unordered by definition; arrays (lists, etc.) are ordered by definition. `[["a",1],["b",2]]` and `[["b",2],["a",1]]` are distinct; `{"a":1,"b":2}` and `{"b":2,"a":1}` are equivalent.
They should be equivalent, on that we agree; but the standard on its own does not establish their equivalence. It explicitly allows for them to be distinguished.
The RFC says that implementations must parse `{"a":1,"b":2}` and `{"b":2,"a":1}` to values which are interoperable. Of course implementations can keep the raw bytes and use them to differentiate the one from the other on that basis, but that’s unrelated to interoperability as expressed by the RFC. You know this isn’t really an interesting point to get into the weeds on, so I’ll bow out.
edit: that’s from
I wish you’d point me to where in the RFC it says it “must” parse them identically, but fair enough.
Yeah, something like this is necessary, but unfortunately there are multiple base64 encoding schemes 🥲 I like straight up hex encoding for this reason. No ambiguity, and not really that much bigger than base64, especially given that this stuff is almost always going through a gzipped HTTP pipe, anyway.
I’ve done a lot of work in the area of base conversion (for example).
For projects implementing a base 64, we suggest b64ut which is shorthand for RFC 4648 base 64 URI canonical with padding truncated.
Base 64 is ~33% smaller than Hex. That savings was the chief motivating factor for Coze to migrate away from the less efficient Hex to base64. To address the issues with base 64, the stricter b64ut was defined.
Here’s a small Go library that uses b64ut.
Here are some notes comparing hex and base 64 and the rationale justifying b64ut, and a GitHub issue concerning non-canonical base 64.
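Going by that description, b64ut-style encoding can be sketched with the standard library: the URL-safe RFC 4648 alphabet with the padding stripped (the canonical-decoding check on trailing bits isn’t shown). The size difference versus hex is easy to see for a 32-byte digest:

```python
import base64
import os

digest = os.urandom(32)  # e.g. a SHA-256 digest

hex_form = digest.hex()
b64ut_form = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print(len(hex_form))    # 64 characters
print(len(b64ut_form))  # 43 characters, roughly a third shorter
```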
A little more on b64ut:
b64ut (RFC 4648 base 64 URI canonical with padding truncated) is:
2. `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/` and `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/`. b64ut uses the safe alphabet.
2.1. On a tangent, the RFC’s alphabets are “out of order”. A more natural order, from a number perspective but also an ASCII perspective, is to start with 0, so e.g. `0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz` would have been a more natural alphabet. Regardless, one of the two RFC alphabets is employed by b64ut. I use the more natural alphabet for all my bases when not using RFC base 64.
3. b64ut does not use padding characters, but since the encoding method adds padding, they are subsequently “truncated”.
4. b64ut uses canonical encoding. There is only a single valid canonical encoding and decoding, and they align. For example, non-canonical systems may interpret `hOk` and `hOl` as the same value. Canonical decoding errors on the non-canonical encoding.
There are multiple RFC 4648 encoding schemes, and RFC 4648 only uses a single conversion method that we’ve termed a “bucket conversion” method. There is also the natural base conversion, which is produced by the “iterative divide by radix” method. Thankfully, natural and bucket conversion align when “buckets” (another technical term) are full and alphabets are in order. Otherwise, it does not align and encodings are mismatched.
I made a tool to play with natural base conversions, and the RFC is available under the “extras” tab.
https://convert.zamicol.com
Here’s an example converting a binary string to a non-RFC 4648 base 64: https://convert.zamicol.com/#?inAlph=01&in=10111010100010111010&outAlph=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!%2523
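For the curious, the “iterative divide by radix” (natural) conversion mentioned above can be sketched like this; the alphabet is the in-ASCII-order one from point 2.1, padded to 64 symbols with two arbitrary extra characters chosen just for this example:

```python
def natural_encode(data: bytes, alphabet: str) -> str:
    # Treat the input as one big integer and repeatedly divide by the radix,
    # like printing a number in another base. This is the "natural"
    # conversion, not RFC 4648's bucket-based encoding, and (as written)
    # it does not preserve leading zero bytes.
    n = int.from_bytes(data, "big")
    if n == 0:
        return alphabet[0]
    radix, out = len(alphabet), []
    while n:
        n, rem = divmod(n, radix)
        out.append(alphabet[rem])
    return "".join(reversed(out))

# In-ASCII-order base-64 alphabet, padded with two arbitrary extra symbols.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_"
print(natural_encode(b"hello", ALPHABET))
```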
To my eyes, the two alphabets in point 2 in your comment look identical. What am I missing?
You’re right!
1: `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_`
2: `ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/`
1 being URI safe and 2 URI unsafe.
And what’s the difference when those strings are gzipped (as effectively every such string will be)?
Gzip isn’t always available and when it is it requires extra processing. Yes, I’d try to use gzip when available.
JSON is marshaled into UTF-8 which is easily signed or verified.
Again, UTF-8 doesn’t guarantee what you’re suggesting, here. UTF-8 guarantees properties of individual runes (characters), not anything about the specific order of runes in a string.
UTF-8 by definition is a series of ordered bytes.
Yes, for individual characters (runes). And a UTF-8 string is a sequence of zero or more valid UTF-8 characters (runes). But the order of those runes in a string is not relevant to the UTF-8 validity of that string.
Validity is tangential; the point is order, and UTF-8 is a series of ordered bytes.
I believe the following abstraction layer diagram fairly characterizes your view:
The jump from UTF-8 to JSON is where some order information may be considered to be lost in the narrow scope of object keys, while acknowledging all the rest of JSON is still ordered, including the explicitly ordered arrays.
Order information is present and is passed along this abstraction chain. Order information can only be considered absent after the UTF-8 abstraction layer. At the UTF-8 layer, all relevant order information is fully present.
This isn’t true in the sense that you mean. UTF-8 is an encoding format that guarantees a valid series of ordered bytes for individual characters (i.e. runes) — it doesn’t guarantee anything about the order of valid runes in a sequence of valid runes (i.e. a string).
Within each individual character (rune), yes. Across multiple characters (runes) that form the string, no. That a string is UTF-8 provides guarantees about individual elements of that string only, it doesn’t provide any guarantee about the string as a whole, beyond that each element of the string is a valid UTF-8 character (rune).
Sending JSON payload bytes

{"a":1}

does not guarantee the receiver will receive bytes

{"a":1}

exactly; they can just as well receive

{ "a": 1 }

and the receiver must treat those payloads the same.

edit: This sub-thread is a great example of what I meant in my OP, for the record 😞
UTF-8 is a series of ordered bytes. UTF-8 contains order information by definition.
That is the point: Order is present for UTF-8. Only after UTF-8 can order information finally start to be subtracted. Omitting order information at the UTF-8 abstraction layer is against UTF-8’s specification and is simply not permitted. Order information can only be subtracted after UTF-8.
JSON, by specification, marshals to and from UTF-8. In the very least, we have to acknowledge order information is available at the UTF-8 layer even if it is subtracted for JSON objects.
You keep repeating this, but it isn’t true in the sense that you mean.
See
UTF-8 is an encoding for individual characters (runes). It defines a set of valid byte sequences for valid runes, and contains order information for the bytes comprising those valid runes. It does not define or guarantee or assert any kind of order information for strings, except insofar as a UTF-8 string is comprised of valid UTF-8 runes.
That JSON marshals to a UTF-8 encoded byte sequence does not mean that UTF-8 somehow enforces the order of all of the bytes in that byte sequence. Bytes in individual runes, yes; all the bytes in the complete byte sequence, no.
I’m not sure what this means. UTF-8 asserts “order information” at the level of individual runes, not complete strings.
UTF-8 does not provide any order information which is relevant to JSON payloads, except insofar that JSON payloads can reliably assume their keys and values are valid UTF-8 byte sequences.
If UTF-8 was not ordered, the letters in this sentence would be out of order as this sentence itself is encoded in UTF-8.
UTF-8 by definition is ordered. This is a fundamental aspect of UTF-8. There’s nothing simpler that can be said because fundamental properties are the simplest bits of truth: UTF-8 is ordered. UTF-8 strings are a series of ordered bytes.
UTF-8 is a string. Order is significant for all strings. All strings are a series of ordered bytes.
Yes, it has order information.
JSON inherits order, especially arrays, from the previous abstraction layer, in this case, UTF-8. If this were not the case, how is order information known to JSON arrays, which are ordered? Where is the order information inherited from if not from the previous abstraction layer?
Edit:
That is incorrect. UTF-8 by definition is a series of ordered bytes, which is the definition of a string. UTF-8 already exists in that paradigm. It does not need to further confine a property it already inherits. UTF-8 is a string encoding format.
https://en.wikipedia.org/wiki/UTF-8
—
The order of JSON arrays is part of the JSON specification. It’s completely unrelated to how JSON objects are marshaled to bytes, whether that’s in UTF-8 or any other encoding format.
Is the order of fields in a CSV file “inherited from” the encoding of that file?
—
At this point I’m not sure how to respond in a way that will be productive. Apologies, and good luck.
That is in the context of strings. JSON doesn’t define UTF-8 as its encoding format for a single character. JSON defines UTF-8 as the character encoding format for strings. Strings are ordered. The entirety of UTF-8 is defined in the context of string encoding.
When parsing a JSON array, where is the array’s order information known from? Of course, the source string contains the order. JSON parsers must store this order information for arrays, as required by the spec. JSON inherits order from the incoming string.
JSON defines arrays as ordered, and objects as unordered. The specific order of array elements in a JSON payload is meaningful (per the spec) and is guaranteed to be preserved, but the specific order of object keys is not meaningful and is not guaranteed to be preserved.
When JSON is unmarshalled from a string, where does an array’s order information come from? Does it come from the incoming string?
Yes, it does. But the important detail here is that JSON arrays have an ordering, whereas JSON maps don’t have an ordering. So when you encode (or transcode) a JSON payload, you have to preserve the order of values in arrays, but you don’t have to preserve the order of keys in objects.
If you unmarshal the JSON payload

{"a":[1,2]}

to some value x, and the JSON payload

{"a":[2,1]}

to some value y of the same type, then x != y. But if you unmarshal the JSON payload

{"a":1,"b":2}

to some value x, and the JSON payload

{"b":2,"a":1}

to some value y of the same type, then x == y.

Coze models the Pay field as a json.RawMessage, which is just the raw bytes as received. It also produces hashes over those bytes directly. But that means different pay object key order produces different hashes, which means key order impacts equivalence, which is no bueno.
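To make that last point concrete, here’s a minimal Go sketch (my own illustration, not Coze’s code): the two object payloads unmarshal to equal values, but hashing the raw bytes, as a json.RawMessage-style approach does, yields different digests.

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"reflect"
)

func main() {
	a := []byte(`{"a":1,"b":2}`)
	b := []byte(`{"b":2,"a":1}`)

	var x, y map[string]int
	_ = json.Unmarshal(a, &x)
	_ = json.Unmarshal(b, &y)

	// Equivalent as abstract JSON objects...
	fmt.Println(reflect.DeepEqual(x, y)) // true

	// ...but hashing the raw payload bytes gives different digests, so
	// object key order leaks into "equivalence".
	fmt.Println(sha256.Sum256(a) == sha256.Sum256(b)) // false
}
```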
You can’t have it both ways. You can’t argue for JSON being both the pure abstract form and also a concrete string. JSON is not a string, JSON is an abstraction that’s serialized into a string; I agree with that. The abstract JSON is parsed from a concrete string, and strings carry order information. Obviously JSON is inheriting order from the abstraction layer above, which in this case is string (UTF-8). The order is there, as shown by arrays being ordered.
When JSON is parsed from UTF-8, it is now in an abstract JSON form. When it’s serialized into UTF-8, it’s not the abstract JSON, it is now a string. It’s not both. I don’t see any issue categorizing JSON as a pure abstraction, however, the abstraction is solidified when serialized.
JOSE, Matrix, Coze, PASETO all use UTF-8 ordering, and not only does it work well, but it is idiomatic.
These tools do not verify or sign JSON; they sign and verify strings, a critical distinction. After that processing, the result may then be interpreted into JSON. These tools are a logical layer around JSON, and the JSON these tools process is JSON. In the example of Coze, not all JSON is Coze, but all Coze is JSON. That’s a logical hierarchy without hint of logical conflict. As I like to say, that makes too much sense.
I fully acknowledge your “JSON objects are unordered” standpoint, but after all this time I have no hesitation saying it’s without merit. Even if that were the case, in that viewpoint these tools are not signing JSON, they’re signing strings. All cryptographic primitives sign strings, not abstract unserialized formats. And that too is no problem, far better: JSON defines the exact serialization format. That’s the idiomatic bridge permitting signing. It’s logical, idiomatic, ergonomic, it works, but most of all, it’s pragmatic.
If JSON said in its spec, “JSON is an abstract data format that prohibits serialization”, this would be a problem. But what use would such a tool be? If JSON said, “JSON objects are unordered and the JSON spec prohibits any order information being transmitted in its serialized form”, that too would be a problem, but why would it ever have such a silly prohibition? To say “you can’t sign JSON because it’s unordered” is exactly that silly prohibition.
My understanding of your position is: if user A serializes a JSON object to a specific sequence of (let’s say UTF-8 encoded) bytes (or, as you say, a string) and sends those bytes to user B, then — no matter how they are sent — the bytes that are received by B can be safely assumed to be identical to the bytes that were sent by A.
Is that accurate?
–
This assumption is true most of the time, but it’s not true always. How the bytes are sent is relevant. Bytes are not just bytes, they’re interpreted at every step along the way, based on one thing or another.
If JSON serialized bytes are sent via a ZeroMQ connection without annotation, or over raw TCP, or whatever, then sure, it’s reasonable to assume they are opaque and won’t be modified.
But if they’re sent as the body of an HTTP request with a Content-Type of application/json, then those bytes are no longer opaque, they are explicitly designated as JSON, and that changes the rules. Any intermediary is free to transform those bytes in any way which doesn’t violate the JSON spec and results in a payload which represents an equivalent abstract JSON object.
These transformations are perfectly valid and acceptable and common, and they’re effectively impossible to detect or prevent by either the sender or the receiver.
–
The JSON form defined by JOSE represents signed/verifiable payloads as base64 encoded strings in the JSON object, not as JSON objects directly. This is a valid approach which I’m advocating for.
Matrix says
Which means signatures are not made (or verified) over the raw JSON bytes produced by a stdlib encoder or received from the wire. Instead, those raw wire bytes are parsed into an abstract JSON object, that object is serialized via the canonical encoding by every signer/verifier, and those canonical serialized bytes are signed/verified. That’s another valid approach that I’m advocating for.
The problem is when you treat the raw bytes from the wire as canonical, and sign/verify them directly. That isn’t valid, because those bytes are not stable.
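A rough Go sketch of that second approach (my own illustration, not the Matrix or Coze implementation): parse the wire bytes into an abstract value, re-serialize canonically, and sign/verify the canonical bytes. Go’s encoding/json happens to sort object keys when marshaling a map, which stands in for a real canonical encoding here; actual canonical JSON specs also pin down number and string formatting.

```go
package main

import (
	"crypto/ed25519"
	"encoding/json"
	"fmt"
)

// canonicalize parses arbitrary JSON and re-serializes it with sorted object
// keys and no insignificant whitespace.
func canonicalize(wire []byte) ([]byte, error) {
	var v interface{}
	if err := json.Unmarshal(wire, &v); err != nil {
		return nil, err
	}
	return json.Marshal(v)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	sent := []byte(`{"b":2,"a":1}`)
	recv := []byte(`{ "a": 1, "b": 2 }`) // re-ordered and re-spaced in transit

	cSent, _ := canonicalize(sent)
	cRecv, _ := canonicalize(recv)

	sig := ed25519.Sign(priv, cSent)
	fmt.Println(ed25519.Verify(pub, cRecv, sig)) // true: canonical forms match
}
```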
Coze speaks to Coze. Coze is JSON; JSON is not necessarily Coze. Coze is a superset, not a subset. Coze explicitly says that if a JSON parser ignores Coze and does a Coze-invalid transformation, that Coze may be invalid.
This is true for JOSE, Matrix, Coze, PASETO
https://i.imgur.com/JYS7SFI.png
Incorrect. There’s no logical difference between encoding to UTF-8 or base 64.
This exactly is the mismatch. Since “JSON objects don’t define order” any JWT implementation may serialize payloads into any order. Base 64 isn’t a magic fix for this.
Of course, all implementations serialize into an order. That’s what serialization does by definition. And it doesn’t matter what the serialization encoding is, by definition, any serialization performs exactly this operation.
It’s so obvious, so foundational, so implicitly taken for granted, that the fact is being overlooked.
Regarding signing JSON, Peter and I have had a discussion going since March of this year.
I think it’s fair to say Peter’s position is that he’s concerned about signing JSON.
Our position is that signing JSON is not problematic at all. We sign JSON (Coze) without incident using simple canonicalization, which is straightforward and easy to implement (Go implementation and Javascript implementation).
Do you have a recommendation for a (relatively) painless serialization format that is bijective without having to jump through too many hoops?
Doesn’t CBOR provide this, as mentioned in this comment by @peterbourgon ?
https://lobste.rs/s/wvi9xw/why_not_matrix#c_eh9ogd
Yeah, it’s probably as good as it gets. I guess I still need to sort maps manually, and be careful which types I use, in order to get the same output for equivalent input data, but I might be misremembering things. I’ll have another look at the details, I remember that dag-cbor was pretty close to what I needed when I looked last time, but it only allows a very limited set of types.
It’s really hard! Bijectivity itself is easy, just take the in-memory representation of a value, dump the bytes to a hex string, and Bob’s your uncle. But that assumes two things (at least) which probably aren’t gonna fly.
First, that in-memory representation is probably only useful in the language you produced it from — and maybe even the specific version of that language you were using at the time. That makes it impractical to do any kind of SDK in any other language.
Second, if you extend or refactor your type in any way, backwards compatibility (newer versions can use older values) requires an adapter for that original type. Annoying, but feasible. But forwards compatibility (older versions can use newer values) is only possible if you plan for it from the beginning.
There are plenty of serialization formats which solve these problems: Thrift, Protobuf, Avro, even JSON (if you squint), many others. But throw in bijective as another requirement, and I think CBOR is the only one that comes to mind. I would love to learn about some others, if anyone knows of some!
But it’s a properly hard problem. So hard, in fact, that any security-sensitive projects worth its salt will solve it by not having it in the first place. If you produce the signed (msg) bytes with a stable and deterministic encoder, and — critically — you send those bytes directly alongside the signature (sig) bytes as values in your messages, then there’s no ambiguity about which bytes have been signed, or which bytes need to be verified. Which means you can use whatever encoder you want for the messages themselves — JSON can re-order fields, insert or remove whitespace between elements, etc., but it can’t change the value of a (properly-encoded) string. And because you don’t need to decode the msg bytes in order to verify the sig, you don’t need full bijectivity, in either encoder.
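For illustration, here’s a minimal Go sketch of that pattern (the msg/sig field names are just placeholders): the signed bytes travel as an opaque string value inside the envelope, so envelope re-encoding can’t disturb them, and the verifier never has to re-derive them.

```go
package main

import (
	"crypto/ed25519"
	"encoding/json"
	"fmt"
)

// Envelope carries the exact signed bytes next to the signature.
type Envelope struct {
	Msg []byte `json:"msg"` // signed bytes; marshals as an opaque base64 string
	Sig []byte `json:"sig"` // signature over Msg, byte for byte
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	// Produce msg with any stable, deterministic encoder; the envelope
	// encoder is then free to do whatever it likes.
	msg, _ := json.Marshal(map[string]int{"a": 1, "b": 2})
	wire, _ := json.Marshal(Envelope{Msg: msg, Sig: ed25519.Sign(priv, msg)})

	// The receiver verifies over Msg exactly as received: no canonicalization,
	// no re-encoding of the payload, no bijectivity requirement on either side.
	var got Envelope
	_ = json.Unmarshal(wire, &got)
	fmt.Println(ed25519.Verify(pub, got.Msg, got.Sig)) // true
}
```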
https://preserves.dev/ (Disclaimer: it’s something I started)
Thanks! This looks quite interesting! I’ll have a play with the Rust bindings and see what it can do. I haven’t looked in detail yet, but it looks like it plugs into serde, so it should be easy and cheap to try it out.
I consider Coze’s approach as simple.
We sign JSON and it works just fine.
Coze uses strict base 64 encoding and canonicalization. That’s all that’s needed to make JSON and signing work.
In Coze, the canonical form is generated by three steps:
That’s it.
JSON + Canonicalization allows signing/verification. Canonicalization is the key.
Looks like the beginning of the end of the fantastic progress in tech that’s resulted from a relative lack of regulation.
Also, probably, a massive spike in grift jobs as people are hired to ensure compliance.
Looks like the beginning of the end for the unnecessary e-waste provoked by companies forcing obsolescence and anti-consumer patterns made possible by the lack of regulations.
It’s amazing that no matter how good the news is about a regulation you’ll always be able to find someone to complain about how it harms some hypothetical innovation.
Sure. Possibly that too - although I’d be mildly surprised if the legislation actually delivers the intended upside, as opposed to just delivering unintended consequences.
And just to be clear: the unintended consequences here include the retardation of an industry that’s delivered us progress from 8 bit micros with 64KiB RAM to pervasive Internet and pocket supercomputers in one generation.
Edited to add: I run a refurbished W540 with Linux Mint as a “gaming” laptop, a refurbished T470s with FreeBSD as my daily driver, a refurbished Pixel 3 with Lineage as my phone, and a PineTime and Pine Buds Pro. I really do grok the issues with the industry around planned obsolescence, waste, and consumer hostility.
I just still don’t think the cost of regulation is worth it.
I’m a EU citizen, and I see this argument made every single time the EU passes a new legislation affecting tech. So far, those worries never materialized.
I just can’t see why having removable batteries would hinder innovation. Each company will still want to sell their products, so they will be pressed to find creative ways to have a sleek design while meeting regulations.
Do you think Apple engineers are not capable of designing AirPods that have a removable battery? The battery is even in the stem, so it could be as simple as having the stem be detachable. It was just simpler to super-glue everything shut, plus it comes with the benefit of forcing consumers to upgrade once their AirPods have unusable battery life.
Also, if I’m not mistaken it is about service-time replaceable battery, not “drop-on-the-floor-and-your-phone-is-in-6-parts” replaceable as in the old times.
In the specific case of batteries, yep, you’re right. The legislation actually carves special exception for batteries that’s even more manufacturer-friendly than other requirements – you can make devices with batteries that can only be replaced in a workshop environment or by a person with basic repair training, or even restrict access to batteries to authorised partners. But you have to meet some battery quality criteria and a plausible commercial reason for restricting battery replacement or access to batteries (e.g. an IP42 or, respectively, IP67 rating).
Yes, I know, what about the extra regulatory burden: said battery quality criteria are just industry-standard rating methods (remaining capacity after 500 and 1,000 cycles) which battery suppliers already provide, so manufacturers that currently apply the CE rating don’t actually need to do anything new to be compliant. In fact the vast majority of devices on the EU market are already compliant, if anyone isn’t they really got tricked by whoever’s selling them the batteries.
The only additional requirements set in place is that fasteners have to be resupplied or reusable. Most fasteners that also perform electrical functions are inherently reusable (on account of being metallic) so in practice that just means, if your batteries are fastened with adhesive, you have to provide that (or a compatible) adhesive for the prescribed duration. As long as you keep making devices with adhesive-fastened batteries that’s basically free.
i.e. none of this requires any innovation of any kind – in fact the vast majority of companies active on the EU market can keep on doing exactly what they’re doing now modulo exclusive supply contracts (which they can actually keep if they want to, but then they have to provide the parts to authorised repair partners).
Man do I ever miss those days though. Device not powering off the way I’m telling it to? Can’t figure out how to get this alarm app to stop making noise in this crowded room? Fine - rip the battery cover off and forcibly end the noise. 100% success rate.
You’re enjoying those ubiquitous “This site uses cookies” pop-ups, then?
Of course they’re capable, but there are always trade-offs. I am very skeptical that something as tiny and densely packed as an AirPod could be made with removable parts without becoming a lot less durable or reliable, and/or more expensive. Do you have the hardware/manufacturing expertise to back up your assumptions?
I don’t know where the battery is in an AirPod, but I do know that lithium-polymer batteries can be molded into arbitrary shapes and are often designed to fill the space around the other components, which tends to make them difficult or impossible to remove.
Those aren’t required by law; those happen when a company makes customer-hostile decisions and wants to deflect the blame to the EU for forcing them to be transparent about their bad decisions.
Huh? Using cookies is “user-hostile”? I mean, I actually remember using the web before cookies were a thing, and that was pretty user-unfriendly: all state had to be kept in the URL, and if you hit the Back button it reversed any state, like what you had in your shopping cart.
That kind of cookie requires no popup though, only the ones used to share info with third parties or collect unwarranted information.
I can’t believe so many years later people still believe the cookie law applies to all cookies.
Please educate yourself: the law explicitly applies only to cookies used for tracking and marketing purposes, not for functional purposes.
The law also specifies that the banner must have a single button to “reject all cookies”, so any website that asks you to go through a complex flow to reject your consent is not compliant.
It requires consent for all but “strictly necessary” cookies. According to the definitions on that page, that covers a lot more than tracking and marketing. For example, “choices you have made in the past, like what language you prefer”, or “statistics cookies” whose “sole purpose is to improve website function”. Definitely overreach.
we do know it and it’s a Li-Ion button cell https://guide-images.cdn.ifixit.com/igi/QG4Cd6cMiYVcMxiE.large
FWIW this regulation doesn’t apply to the Airpods. But if for some reason it ever did, and based on the teardown here, the main obstacle for compliance is that the battery is behind a membrane that would need to be destroyed. A replaceable fastener that would allow it to be vertically extracted, for example, would allow for cheap compliance. If Apple got their shit together and got a waterproof rating, I think they could actually claim compliance without doing anything else – it looks like the battery is already replaceable in a workshop environment (someone’s done it here) and you can still do that.
(But do note that I’m basing this off pictures, I never had a pair of AirPods – frankly I never understood their appeal)
Sure, Apple is capable of doing it. And unlike my PinePhone the result would be a working phone ;)
But the issue isn’t a technical one. It’s the costs involved in finding those creative ways, to hiring people to ensure compliance, and especially to new entrants to the field.
It’s demonstrably untrue that the costs never materialise. Speak to business owners about the cost of regulatory compliance sometime. Red tape is expensive.
What is the alternative?
Those companies are clearly engaging in anti-consumer behavior, actively trying to stop right to repair and more.
The industry demonstrated to be incapable of self-regulating, so I think it’s about time to force their hand.
This law can be read in its entirety in a few minutes, it’s reasonable and to the point.
Is that a trick question? The alternative is not regulating, and it’s delivered absolutely stunning results so far. Again: airgapped 8 bit desk toys to pocket supercomputers with pervasive Internet in a generation.
Edited to add: and this isn’t a new problem they’re dealing with; Apple has been pulling various customer-hostile shit moves since Jobs’ influence outgrew Woz’s:
(from https://www.folklore.org/StoryView.py?project=Macintosh&story=Diagnostic_Port.txt )
Edited to add, again: I mean this without snark, coming from a country (Australia) that despite its larrikin reputation is astoundingly fond of red tape, regulation, conformity, and conservatism. But I think there’s a reason Silicon Valley is in America, and not either Europe or Australasia, and it’s cultural as much as it’s economic.
Did a standard electric plug also stifle innovation? Or mandates about a car having to fit on a lane?
Laws are the most important safety lines we have, otherwise companies would just optimize for profit in malicious ways.
The reason is literally buckets and buckets of money from defense spending. You should already know this.
It’s not just that. Lots of people have studied this and one of the key reasons is that the USA has a large set of people with disposable income that all speaks the same language. There was a huge amount of tech innovation in the UK in the ’80s and ’90s (contemporaries of Apple, Microsoft, and so on) but very few companies made it to international success because their US competitors could sell to a market (at least) five times the size before they needed to deal with export rules or localisation. Most of these companies either went under because US companies had larger economies of scale or were bought by US companies.
The EU has a larger middle class than the USA now, I believe, but they speak over a dozen languages and expect products to be translated into their own locales. A French company doesn’t have to deal with export regulations to sell in Germany, but they do need to make sure that they translate everything (including things like changing decimal separators). And then, if they want to sell in Spain, they need to do all of that again. This might change in the next decade, since LLM-driven machine translation is starting to be actually usable (helped for the EU by the fact that the EU Parliament proceedings are professionally translated into all member states’ languages, giving a fantastic training corpus).
The thing that should worry American Exceptionalists is that the middle class in China is now about as large as the population of America and they all read the same language. A Chinese company has a much bigger advantage than a US company in this regard. They can sell to at least twice as many people with disposable income without dealing with export rules or localisation than a US company.
That’s one of the reasons but it’s clearly not sufficient. Other countries have spent up on taxpayer’s purse and not spawned a silicon valley of their own.
“Spent up”? At anything near the level of the USA??
Yeah.
https://en.m.wikipedia.org/wiki/History_of_computing_in_the_Soviet_Union
But they failed basically because of the Economic Calculation Problem - even with good funding and smart people, they couldn’t manufacture worth a damn.
https://en.m.wikipedia.org/wiki/Economic_calculation_problem
Money - wherever it comes from - is an obvious prerequisite. But it’s not sufficient - you need a (somewhat at least) free economy and a consequently functional manufacturing capacity. And a culture that rewards, not kills or jails, intellectual independence.
But they did spawn a Silicon Valley of their own:
https://en.wikipedia.org/wiki/Zelenograd
The Wikipedia article cites a number of factors:
Government spending tends to help with these kind of things. As it did for the foundations of the Internet itself. Attributing most of the progress we had so far to lack of regulation is… unwarranted at best.
Besides, it’s not like anyone is advocating we go back in time and regulate the industry to prevent current problems without current insight. We have specific problems now that we could easily regulate without imposing too much a cost on manufacturers: there’s a battery? It must be replaceable by the end user. Device pairing prevents third party repairs? Just ban it. Or maybe keep it, but provide the tools to re-pair any new component. They’re using proprietary connectors? Consider standardising it all to USB-C or similar. It’s a game of whack-a-mole, but at least this way we don’t over-regulate.
Beware comrade, folks will come here to make a slippery slope arguments about how requiring battery replacements & other minor guard rails towards consumer-forward, e-waste-reducing design will lead to the regulation of everything & fully stifle all technological progress.
What I’d be more concerned about is how those cabals weaponize the legislation in their favor by setting and/or creating the standards. I look at how the EU is saying all these chat apps need to quit that proprietary, non-cross-chatter behavior. Instead of reverting their code to the XMPP of yore, which is controlled by a third-party committee/community, and which many of their chats were designed after, they want to create a new standard together & will likely find a way to hit the minimum legal requirements while still keeping a majority of their service within the garden, or only allow other big corporate players to adapt/use their protocol with a 2000-page specification with bugs, inconsistencies, & unspecified behavior.
Whack enough moles and over-regulation is exactly what you get - a smothering weight of decades of incremental regulation that no-one fully comprehends.
One of the reason the tech industry can move as fast as it does is that it hasn’t yet had the time to accumulate this - or the endless procession of grifting consultants and unions that burden other industries.
It isn’t exactly what you get. You’re not here complaining about the fact that your mobile phone electrocutes you or gives you RF burns or stops your TV reception - because you don’t realise that there is already lots of regulation from which you benefit. This is just a bit more, not the straw-man binary you’re making it out to be.
I am curious however: do you see the current situation as tenable? You mention above that there are anti-consumerist practices and the like, but also express concern that regulation will quickly slippery slope away, but I am curious if you think the current system where there is more and more lock in both on the web and in devices can be pried back from those parties?
Why are those results stunning? Is there any reason to think that those improvements were difficult in the first place?
There are a lot of economic incentives, and it was a new field of science application, that has benefited from so many other fields exploding at the same time.
It’s definitely not enough to attribute those results to the lack of regulation. The “utility function” might have just been especially ripe for optimization in that specific local area, with or without regulations.
Now, we see monopolies appearing again and associated anti-consumer decisions to the benefit of the bigger players. This situation is well-known – tragedy-of-the-commons situations in markets are never fixed by the players themselves.
Your alternative of not doing anything hinges on the hope that your ideologically biased opinion won’t clash with reality. It’s naive to believe corporations not to attempt to maximize their profits when they have an opportunity.
Well, I guess I am wrong then, but I prefer slower progress, slower computers, and generating less waste than just letting companies do all they want.
This did not happen without regulation. The FCC exists for instance. All of the actual technological development was funded by the government, if not conducted directly by government agencies.
As a customer, I react to this by never voluntarily buying Apple products. And I did buy a Framework laptop when it first became available, which I still use. Regulations that help entrench Apple and make it harder for new companies like Framework to get started are bad for me and what I care about with consumer technology (note that Framework started in the US, rather than the EU, and that in general Europeans immigrate to the US to start technology companies rather than Americans immigrating to the EU to do the same).
Which is reasonable. Earlier albertorestifo spoke about legislation “forc[ing] their hand” which is a fair summary - it’s the use of force instead of voluntary association.
(Although I’d argue that anti-circumvention laws, etc. prescribing what owners can’t do with their devices is equally wrong, and should also not be a thing).
The problem with voluntary association is that most people don’t know what they’re associating with when they buy a new product. Or they think short term, only to cry later when repairing their device is more expensive than buying a new one.
There’s a similar tension at play with GitHub’s rollout of mandatory 2FA: it really annoys me, adding TOTP didn’t improve my security by one iota (I already use KeepassXC), but many people do use insecure passwords, and you can’t tell by looking at their code. (In this analogy GitHub plays the role of the regulator.)
I mean, you’re not wrong. But don’t you feel like the solution isn’t to infantilise people by treating them like they’re incapable of knowing?
For what it’s worth I fully support legislation enforcing “plain $LANGUAGE” contracts. Fraud is a species of violence; people should understand what they’re signing.
But by the same token, if people don’t care to research the repair costs of their devices before buying them … why is that a problem that requires legislation?
They’re not, if we give them access to the information, and there are alternatives. If all the major phone manufacturers produce locked down phones with impossible to swap components (pairing), that are supported only for 1 year, what are people to do? If people have no idea how secure someone’s authentication is on GitHub, how can they make an informed decision about security?
When important stuff like that is prominently displayed on the package, it does influence purchase decisions. So people do care. But more importantly, a bad score on that front makes manufacturers look bad enough that they would quickly change course and sell stuff that’s easier to repair, effectively giving people more choice. So yeah, a bit of legislation is warranted in my opinion.
I’m not a business owner in this field but I did work at the engineering (and then product management, for my sins) end of it for years. I can tell you that, at least back in 2016, when I last did any kind of electronics design:
Execs will throw their hands in the air and declare anything super-expensive, especially if it requires them to put managers to work. They aren’t always wrong but in this particular case IMHO they are. The additional design-time costs this bill imposes are trivial, and at least some of them can be offset by costs you save elsewhere on the manufacturing chain. Also, well-ran marketing and logistics departments can turn many of its extra requirements into real opportunities.
I don’t want any of these things more than I want improved waterproofing. Why should every EU citizen that has the same priorities I do not be able to buy the device they want?
Then I have some very good news for you!
The law doesn’t prohibit waterproof devices. In fact, it makes clear exceptions for such cases. It mandates that the battery must be replaceable without specialized tools and by any competent shop; it doesn’t mandate a user-replaceable battery.
I don’t want to defend the bill (I’m skeptical of politicians making decisions on… just about anything, given how they operate) but I don’t think recourse to history is entirely justified in this case.
For one thing, good repairability and support for most of (if not throughout) a device’s useful lifetime was the norm for a good part of that period, and it wasn’t a hardware-only deal. Windows 3.1 was supported until 2001, almost twice as long as the bill demands. NT 3.1 was supported for seven years, and Windows 95 was supported for six. IRIX versions were supported for 5 (or 7?) years, IIRC.
For another, the current state of affairs is the exact opposite of what deregulation was supposed to achieve, so I find it equally indefensible on (de)regulatory grounds alone. Manufacturers are increasingly convincing users to upgrade not by delivering better and more capable products, but by making them both less durable and harder to repair, and by restricting access to security updates. Instead of allowing businesses to focus on their customers’ needs rather than state-mandated demands, it’s allowing businesses to compensate their inability to meet customer expectations (in terms of device lifetime and justified update threshold) by delivering worse designs.
I’m not against that on principle but I’m also not a fan of footing the bill for all the extra waste collection effort and all the health hazards that generates. Private companies should be more than well aware that there’s no such thing as a free lunch.
Only for a small minority of popular, successful, products. Buying an “orphan” was a very real concern for many years during the microcomputer revolution, and almost every time there were “seismic shifts” in the industry.
Deregulation is the “ground state”.
It’s not supposed to achieve anything, in particular - it just represents the state of minimal initiation of force. Companies can’t force customers to not upgrade / repair / tinker with their devices; and customers can’t force companies to design or support their devices in ways they don’t want to.
Conveniently, it fosters an environment of rapid growth in wealth, capability, and efficiency. Because when companies do what you’re suggesting - nerfing their products to drive revenue - customers go elsewhere.
Which is why you’ll see the greatest proponents of regulation are the companies themselves, these days. Anti-circumvention laws, censorship laws that are only workable by large companies, Government-mandated software (e.g. Korean banking, Android and iOS only identity apps in Australia) and so forth are regulation aimed against customers.
So there’s a part of me that thinks companies are reaping what they sowed, here. But two wrongs don’t make a right; the correct answer is to deregulate both ends.
Maybe. Most early home computers were expensive. People expected them to last a long time. In the late ’80s, most of the computers that friends of mine owned were several years old and lasted for years. The BBC Model B was introduced in 1981 and was still being sold in the early ‘90s. Schools were gradually phasing them out. Things like the Commodore 64 or Sinclair Spectrum had similar longevity. There were outliers but most of them were from companies that went out of business and so wouldn’t be affected by this kind of regulation.
That’s not really true. It assumes a balance of power that is exactly equal between companies and consumers.
Companies force people to upgrade by tying in services to the device and then dropping support in the services for older products. No one buys a phone because they want a shiny bit of plastic with a thinking rock inside, they buy a phone to be able to run programs that accomplish specific things. If you can’t safely connect the device to the Internet and it won’t run the latest apps (which are required to connect to specific services) because the OS is out of date, then they need to upgrade the OS. If they can’t upgrade the OS because the vendor doesn’t provide an upgrade and no one else can because they have locked down the bootloader (and / or not documented any of the device interfaces), then consumers have no choice to upgrade.
Only if there’s another option. Apple controls their app store and so gets a 30% cut of app revenue. This gives them some incentive to support old devices, because they can still make money from them, but they will look carefully at the inflection point where they make more money from upgrades than from sales to older devices. For other vendors, Google makes money from the app store and they don’t[1] and so once a handset has shipped, the vendor has made as much money as they possibly can. If a vendor makes a phone that gets updates longer, then it will cost more. Customers don’t see that at point of sale, so they don’t buy it. I haven’t read the final version of this law, one of the drafts required labelling the support lifetime (which research has shown will have a big impact - it has a surprisingly large impact on purchasing decisions). By moving the baseline up for everyone, companies don’t lose out by being the one vendor to try to do better.
Economists have studied this kind of market failure for a long time and no one who actually does research in economics (i.e. making predictions and trying to falsify them, not going on talk shows) has seriously proposed deregulation as the solution for decades.
Economies are complex systems. Even Adam Smith didn’t think that a model with a complete lack of regulation would lead to the best outcomes.
[1] Some years ago, the Android security team was complaining about the difficulties of support across vendors. I suggested that Google could fix the incentives in their ecosystem by providing a 5% cut of all app sales to the handset maker, conditional on the phone running the latest version of Android. They didn’t want to do that because Google maximising revenue is more important than security for users.
That is remarkably untrue. At least one entire school of economics proposes exactly that.
In fact, they dismiss the entire concept of market failure, because markets exist to provide pricing and a means of exchange, nothing more.
“Market failure” just means “the market isn’t producing the prices I want”.
Is the school of economics you’re talking about actual experimenters, or are they arm-chair philosophers? I trust they propose what you say they propose, but what actual evidence do they have?
I might sound like I’m dismissing an entire scientific discipline, but economics have shown strong signs of being extremely problematic on this front for a long time. One big red flag for instance is the existence of such long lived “schools”, which are a sign of dogma more than they’re a sign of sincere inquiry.
Assuming there’s no major misunderstanding, there’s another red flag right there: markets have a purpose now? Describing what markets do is one thing, but ascribing purpose to them presupposes some sentient entity put them there with intent. Which may very well be true, but then I would ask a historian, not an economist.
Now looking at the actual purpose… the second people exchange stuff for a price, there’s a pricing and a means of exchange. Those are the conditions for a market. Turning it around and making them the “purpose” of market is cheating: in effect, this is saying markets can’t fail by definition, which is quite unhelpful.
This is why I specifically said practicing economists who make predictions. If you actually talk to people who do research in this area, you’ll find that they’re a very evidence-driven social science. The people at the top of the field are making falsifiable predictions based on models and refining their models when they’re wrong.
Economics is intrinsically linked to politics and philosophy. Economic models are like any other model: they predict what will happen if you change nothing or change something, so that you can see whether that fits with your desired outcomes. This is why it’s so often linked to politics and philosophy: Philosophy and politics define policy goals, economics lets you reason about whether particular actions (or inactions) will help you reach those goals. Mechanics is linked to engineering in the same way. Mechanics tells you whether a set of materials arranged in a particular way will be stable, engineering says ‘okay, we want to build a bridge’ and then uses models from mechanics to determine whether the bridge will fall down. In both cases, measurement errors or invalid assumptions can result in the goals not being met when the models say that they should be and in both cases these lead to refinements of the models.
To people working in the field, the schools are just shorthand ways of describing a set of tools that you can use in various contexts.
Unfortunately, most of the time you hear about economics, it’s not from economists, it’s from people who play economists on TV. The likes of the Cato and Mises institutes in the article, for example, work exactly the wrong way around: they decide what policies they want to see applied and then try to tweak their models to justify those policies, rather than looking at what goals they want to see achieved and using the models to work out what policies will achieve those goals.
I really would recommend talking to economists, they tend to be very interesting people. And they hate the TV economists with a passion that I’ve rarely seen anywhere else.
Markets absolutely have a purpose. It is always a policy decision whether to allow a market to exist. Markets are a tool that you can use to optimise production to meet demand in various ways. You can avoid markets entirely in a planned economy (but please don’t, the Great Leap Forward or the early days of the USSR give you a good idea of how many people will die if you do). Something that starts as a market can end up not functioning as a market if there’s a significant power imbalance between producers and consumers.
Markets are one of the most effective tools that we have for optimising production for requirements. Precisely what they will optimise for depends a lot on the shape of the market and that’s something that you can control with regulation. The EU labelling rules on energy efficiency are a great example here. The EU mandated that white goods carry labels showing the score that they got on energy-efficiency tests. The labelling added information for customers and influenced their purchasing decisions. This created demand for more energy-efficient goods and the market responded by providing them. The regulations eventually banned goods below a certain efficiency rating, but it was largely unnecessary because the market adjusted and most things were A rated or above when F ratings were introduced. It worked so well that they had to recalibrate the scale.
I can see how such usurpation could distort my view.
Well… yeah.
I love this example. Plainly shows that often people don’t make the choices they do because they don’t care about such and such criterion, they do so because they just can’t measure the criterion even if they cared. Even a Libertarian should admit that making good purchase decisions requires being well informed.
To be honest I do believe some select parts of the economy should be either centrally planned or have a state provider that can serve everyone: roads, trains, water, electricity, schools… Yet at the same time, other sectors probably benefit more from a Libertarian approach. My favourite example is the Internet: the fibre should be installed by public entities (town, county, state…), and bandwidth rented at a flat rate — no discount for bigger volumes. And then you just let private operators rent the bandwidth however they please, and compete among each other. The observed result in the few places in France that followed this plan (mostly rural areas big private providers didn’t want to invest in) was a myriad of operators of all sizes, including for-profit and non-profit ones (recalling what Benjamin Bayart said, off the top of my head). This gave people an actual choice, and this diversity inherently makes this corner of the internet less controllable and freer.
A Libertarian market on top of a Communist infrastructure. I suspect we can find analogues in many other domains.
This is great initially, but it’s not clear how you pay for upgrades. Presumably 1 Gb/s fibre is fine now, but at some point you’re going to want to migrate everyone to 10 Gb/s or faster, just as you wanted to upgrade from copper to fibre. That’s going to be capital investment. Does it come from general taxation or from revenue raised on the operators? If it’s the former, how do you ensure it’s equitable, if it’s the latter then you’re going to want to amortise the cost across a decade and so pricing sufficiently that you can both maintain the current infrastructure and save enough to upgrade to as-yet-unknown future technology can be tricky.
The problem with private ownership of utilities is that it encourages rent seeking and cutting costs at the expense of service and capital investment. The problem with public ownership is that it’s hard to incentivise efficiency improvements. It’s important to understand the failure modes of both options and ideally design hybrids that avoid the worst problems of both. The problem is that most politicians start with ‘privatisation is good’ or ‘privatisation is bad’ as an ideological view and not ‘good service, without discrimination, at an affordable price is good’ and then try to figure out how to achieve it.
Yes, that’s the point: the more capital-intensive something is (extreme example: nuclear power plants), the less willing private enterprises are to invest in it, and if they do, the more they will want to extract rent from their investment. There’s also the thing about fibre (or copper) being naturally monopolistic, at least if you have a mind to conserve resources and not duplicate lines all over the place.
So there is a point where people must want the thing badly enough that the town/county/state does the investment itself. As it does for any public infrastructure.
Not saying this would be easy though. The difficulties you foresee are spot on.
Ah, I see. Part of this can be solved by making sure the public part is stable, and the private part easy to invest in. For instance, we need boxes and transmitters and whatnot to light up the fibre. I speculate that those boxes are more liable to be improved than the fibre itself, so perhaps we could give them to private interests. But this is reaching the limits of my knowledge of the subject, I’m not informed enough to have an opinion on where the public/private frontier is best placed.
Good point, I’ll keep that in mind.
There’s a lot of nuance here. Private enterprise is quite good at high-risk investments in general (nuclear power less so because it’s regulated such that you can’t just go bankrupt and walk away, for good reasons). A lot of interesting infrastructure were possible because private investors gambled and a lot of them lost a big pile of money. For example, the Iridium satellite phone network cost a lot to deliver and did not recoup costs. The initial investors lost money, but then the infrastructure was for sale at a bargain price and so it ended up being operated successfully. It’s not clear to me how public investment could have matched that (without just throwing away tax payers’ money).
This was the idea behind some of the public-private partnership things that the UK government pushed in the ‘90s (which often didn’t work, you can read a lot of detailed analyses of why not if you search for them): you allow the private sector to take the risk and they get a chunk of the rewards if the risk pays off but the public sector doesn’t lose out if the risk fails. For example, you get a private company to build a building that you will lease from them. They pay all of the costs. If you don’t need the building in five years time then it’s their responsibility to find another tenant. If the building needs unexpected repairs, they pay for them. If everything goes according to plan, you pay a bit more for the building space than if you’d built, owned, and operated it yourself. And you open it out to competitive bids, so if someone can deliver at a lower cost than you could, you save money.
Some procurement processes have added variations on this where the contract goes to the second-lowest bidder, or the winner gets paid what the next-lowest bidder asked for. The former disincentivises stupidly low bids (if you’re lower than everyone else, you don’t win), the latter ensures that you get paid as much as someone else thought they could deliver for, reducing risk to the buyer. There are a lot of variations on this that are differently effective and some economists have put a lot of effort into studying them. Their insights, sadly, are rarely used.
The dangerous potholes throughout UK roads might warn you that this doesn’t always work.
Good point. We need to make sure that these gambles stay gambles, and not, say, save the people who made the bad choice. Save their company perhaps, but seize it in the process. We don’t want to share losses while keeping profits private — which is what happens more often than I’d like.
The intent is good indeed, and I do have an example of a failure in mind: water management in France. Much of it is under a private-public partnership, with Veolia I believe, and… well there are a lot of leaks, a crapton of water is wasted (up to 25% in some of the worst cases), and Veolia seems to be making little more than a token effort to fix the damn leaks. Probably because they don’t really pay for the loss.
It’s often a matter of how much money you want to put in. Public French roads are quite good, even if we exclude the super highways (those are mostly privatised, and I reckon in even better shape). Still, point taken.
Were they actually successful, or did they only decrease operating energy use? You can make a device that uses less power because it lasts half as long before it breaks, but then you have to spend twice as much power manufacturing the things because they only last half as long.
I don’t disagree with your comment, by the way. Although, part of the problem with planned economies was that they just didn’t have the processing power to manage the entire economy; modern computers might make a significant difference, the only way to really find out would be to set up a Great Leap Forward in the 21st century.
I may be misunderstanding your question but energy ratings aren’t based on energy consumption across the device’s entire lifetime, they’re based on energy consumption over a cycle of operation of limited duration, or a set of cycles of operations of limited duration (e.g. a number of hours of functioning at peak luminance for displays, a washing-drying cycle for washer-driers etc.). You can’t get a better rating by making a device that lasts half as long.
Energy ratings and device lifetimes aren’t generally linked by any causal relation. There are studies that suggest the average lifetime for (at least some categories of) household appliances have been decreasing in the last decades, but they show about the same thing regardless of jurisdiction (i.e. even those without labeling or energy efficiency rules, or with different labeling rules) and it’s a trend that started prior to energy efficiency labeling legislation in the EU.
Not directly, but you can e.g. make moving parts lighter/thinner, so they take less power to move but break sooner as a result of them being thinner.
That’s good to hear.
For household appliances, energy ratings are given based on performance under full rated capacities. Moving parts account for a tiny fraction of that in washing machines and washer-driers, and for a very small proportion of the total operating power in dishwashers and refrigerators (and obviously no proportion for electronic displays and lighting sources). They’re also given based on measurements of kWh/cycle rounded to three decimal places.
I’m not saying making some parts lighter doesn’t have an effect for some of the appliances that get energy ratings, but that effect is so close to the rounding error that I doubt anyone is going to risk their warranty figures for it. Lighter parts aren’t necessarily less durable, so if someone’s trying to get a desired rating by lightening the nominal load, they can usually get the same MTTF with slightly better materials, and they’ll gladly swallow some (often all) of the upfront cost just to avoid dealing with added uncertainty of warranty stocks.
Much like orthodox Marxism-Leninism, the Austrian School describes economics by how it should be, not how it actually is.
The major problem with orphans was lack of access to proprietary parts – they were otherwise very repairable. The few manufacturers that can afford proprietary parts today (e.g. Apple) aren’t exactly at risk of going under, which is why that fear is all but gone today.
I have like half a dozen orphan boxes in my collection. Some of them were never sold on Western markets, I’m talking things like devices sold only on the Japanese market for a few years or Soviet ZX Spectrum clones. All of them are repairable even today, some of them even with original parts (except, of course, for the proprietary ones, which aren’t manufactured anymore so you can only get them from existing stocks, or use clone parts). It’s pretty ridiculous that I can repair thirty year-old hardware just fine but if my Macbook croaks, I’m good for a new one, and not because I don’t have (access to) equipment but because I can’t get the parts, and not because they’re not manufactured anymore but because no one will sell them to me.
Deregulation was certainly meant to achieve a lot of things in particular. Not just general outcomes, like a more competitive landscape and the like – every major piece of deregulatory legislation has had concrete goals that it sought to achieve. Most of them actually achieved them in the short run – it was conserving these achievements that turned out to be more problematic.
As for companies not being able to force customers not to upgrade, repair or tinker with their devices, that is really not true. Companies absolutely can and do force customers to not upgrade or repair their devices. For example, they regularly use exclusive supply deals to ensure that customers can’t get the parts they need for it, which they can do without leveraging any government-mandated regulation.
Some of their means are regulation-based – e.g. they take their customers or third parties to court (see e.g. Apple). For most devices, tinkering with them in unsupported ways is against the ToS, too, and while there’s always doubt about how much of that is legally enforceable in each jurisdiction out there, it still carries legal risk, in addition to the weight of force in jurisdictions where such provisions have actually been enforced.
This is very far from a state of minimal initiation of force. It’s a state of minimal initiation of force on the customer end, sure – customers have little financial power (both individually and in numbers, given how expensive organisation is), so in the absence of regulation they can leverage, they have no force to initiate. But companies have considerable resources of force at their disposal.
It’s not like there has been much progress in smartphone hardware over the last 10 years.
Since 2015 every smartphone is the same as the previous model, with a slightly better camera and a better chip. I don’t see how the regulation is making progress more difficult. IMHO it will drive innovation: phones will have to be made more durable.
And, for most consumers, the better camera is the only thing that they notice. An iPhone 8 is still massively overpowered for what a huge number of consumers need, and it was released five years ago. If anything, I think five years is far too short a time to demand support.
Until that user wants to play a mobile game. Just as PC hardware specs were once propelled by gaming, the mobile market is driven by games, and mobile is, I believe, now the most dominant gaming platform.
I don’t think the games are really that CPU / GPU intensive. It’s definitely the dominant gaming platform, but the best selling games are things like Candy Crush (which I admit to having spent far too much time playing). I just upgraded my 2015 iPad Pro and it was fine for all of the games that I tried from the app store (including the ones included with Netflix and a number of the top-ten ones). The only thing it struggled with was the Apple News app, which seems to want to preload vast numbers of articles and so ran out of memory (it had only 2 GiB - the iPhone version seems not to have this problem).
The iPhone 8 (five years old) has an SoC that’s two generations newer than my old iPad, has more than twice as much L2 cache, two high-performance cores that are faster than the two cores in mine (plus four energy-efficient cores, so games can have 100% use of the high-perf ones), and a much more powerful GPU (Apple in-house design replacing a licensed PowerVR one in my device). Anything that runs on my old iPad will barely warm up the CPU/GPU on an iPhone 8.
But a lot of games are intensive & enthusiasts often prefer them. Still, those time-waster types and e-sports titles tend to run on potatoes to grab the largest audience.
Anecdotally, I was recently reunited with my OnePlus 1 (2014) running Lineage OS, & it was choppy at just about everything, especially loading map tiles on OSM (this was using the apps from when I last used it in 2017, in airplane mode, so not just contemporary bloat). I tried Ubuntu Touch on it this year (2023) (listed as having great support) & it was still laggy enough that I’d prefer not to use it, as it couldn’t handle maps well. But even if old devices aren’t performance bottle-necked, newer hardware is certainly more efficient (I highly doubt that saves more energy than the cost of just keeping an old device, but still).
My OnePlus 5T had an unfortunate encounter with a washing machine and tumble dryer, so now the cellular interface doesn’t work (everything else does). The 5T replaced a first-gen Moto G (which was working fine except that the external speaker didn’t work so I couldn’t hear it ring. I considered that a feature, but others disagreed). The Moto G was slow by the end. Drawing maps took a while, for example. The 5T was fine and I’d still be using it if I hadn’t thrown it in the wash. It has an 8-core CPU, 8 GiB of RAM, and an Adreno 540 GPU - that’s pretty good in comparison to the laptop that I was using until very recently.
I replaced the 5T with a 9 Pro. I honestly can’t tell the difference in performance for anything that I do. The 9 Pro is 4 years newer and doesn’t feel any faster for any of the apps or games that I run (and I used it a reasonable amount for work, with Teams, Word, and PowerPoint, which are not exactly light apps on any platform). Apparently the GPU is faster and the CPU has some faster cores but I rarely see anything that suggests that they’re heavily loaded.
The original comment mentioned the iPhone 8 specifically. The Android situation is completely different.
Apple had a significant performance lead for a while. Qualcomm just doesn’t seem to be interested in making high-end chips. They just keep promising that their next-year flagship will be almost as fast as Apple’s previous-year baseline. Additionally there are tons of budget Mediatek Androids that are awfully underpowered even when new.
Flagship Qualcomm chips for Android have been fine for years & more than competitive once you factor in cost. I doubt anyone is buying into either platform purely based on performance numbers anyhow, versus ecosystem and/or wanting hardware options not offered by one or the other.
That’s what I’m saying — Qualcomm goes for large volumes of mid-range chips, and does not have products on the high end. They aren’t even trying.
BTW, I’m flabbergasted that Apple put M1 in iPads. What a waste of a powerful chip on baby software.
Uh, what about their 8xx-series SoCs? On paper they’re comparable to Apple’s A-series; it’s the software that usually is worse.
Still a massacre.
Yeah, true, I could have checked myself. Gap is even bigger right now than two years ago.
Qualcomm is in a self-inflicted rut enabled by their CDMA stranglehold. Samsung is even further behind because their culture doesn’t let them execute.
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-5-single-Android-980x735.jpeg
https://cdn.arstechnica.net/wp-content/uploads/2022/09/iPhone-14-Geekbench-Multi-Android-980x735.jpeg
Those are some cherry-picked comparisons. Apple releases on a different cadence. Check right now & the S23 beats it, as do most flagships. If you blur the timing, it’s all about the same.
It would cost them more to develop and commission fabrication of a more “appropriate” chip.
The high-end Qualcomm is fine. https://www.gsmarena.com/compare.php3?idPhone1=12082&idPhone3=11861&idPhone2=11521#diff- (may require viewing as a desktop site to see 3 columns)
With phones of the same tier released before & after, you can see the benchmarks are all close, as is battery life. Features are wildly different though, since Android can offer a range of different hardware.
It doesn’t for laptops[1], so I doubt it would for smartphones either.
[1] https://www.lowtechmagazine.com/2020/12/how-and-why-i-stopped-buying-new-laptops.html
I think you’re really discounting the experiences of consumers to say they don’t notice the UI and UX changes made possible on the Android platform by improvements in hardware capabilities.
I notice that you’re not naming any. Elsewhere in the thread, I pointed out that I can’t tell the difference between a OnePlus 5T and a 9 Pro, in spite of them being years apart in releases. They can run the same version of Android and the UIs seem identical to me.
I didn’t think I had to. Android 9, 10, 11, 12 have distinct visual styles, and between vendors this distinction can go further - this may be less apparent on OnePlus as they use their own OxygenOS (AOSP upstream ofc) (or at least, they used to) - but consumers notice, even if they can’t clearly articulate what they’ve noticed.
I’m using LineageOS and both phones are running updated versions of the OS. Each version has made the settings app more awful, but I can’t point to anything that’s a better UI or anything that requires newer hardware. Rendering the UI barely wakes up the GPU on the older phone. So what is new, better, and enabled by newer hardware?
I can’t argue either way for “better”, I’m not the market. Newer hardware generally has better capability for graphics processing, leading to more reactive displays at higher refresh rates, and enabling compositing settings and features that otherwise wouldn’t run at an acceptable frame rate.
LineageOS is an AOSP build specifically designed to run fast and support legacy hardware, and is designed to look the same on all that hardware. It’s not a fair comparison to what people like to see with smartphone interfaces and launchers etc.
So please name one of them. A 2017 phone can happily run a 1080p display at a fast enough refresh that I’ve no idea what it is because it’s faster than my eyes can detect, with a full compositing UI. Mobile GPUs have been fast enough to composite every UI element from a separate texture, running complex pixel shaders on them, for ten years. OS X started doing this on laptops over 15 years ago, with integrated Intel graphics cards that are positively anaemic in comparison to anything in a vaguely recent phone. Android has provided a compositing UI toolkit from day one. Flutter, with its 60FPS default, runs very happily on a 2017 phone.
If it helps, I’m actually using the Microsoft launcher on both devices. But, again, you’re claiming that there are super magic UI features that are enabled by new hardware without saying what they are.
Not all innovation is equal. Innovation that isn’t wanted by customers or their suppliers is malinvestment - a waste of human capacity, wealth, and time.
What makes you think that this innovation is not wanted by customers?
There is innovation that is wanted by customers, but manufacturers don’t provide it because it goes against their interests. I think it’s a lie the invisible-hand believers tell themselves when claiming that customers have a choice between a fixable phone and a glued phone with an app store. Of course customers will choose the glued phone with an app store, because they want a usable phone first. But this doesn’t mean they don’t want a fixable phone; it means they were given a Hobson’s choice.
The light-bulb cartel is the single worst example you could give; incandescent light-bulbs are dirt-cheap to replace and burning them hotter ends up improving the quality of their light (i.e. color) dramatically, while saving more in reduced power bills than they cost from shorter lifetimes. This 30min video by Technology Connections covers the point really well.
Okay, that was sloppy of me.
“Not wanted more than any of the other features on offer.”
“Not wanted enough to motivate serious investment in a competitor.”
That last is most telling.
This cynical view is unwarranted in the case of the EU, which so far has done pretty well at avoiding regulatory capture.
The EU has a history of actually forcing companies to innovate in important areas that they themselves wouldn’t want to, like energy efficiency and ecological impact. And their regulations are generally set to start with realistic requirements and are tightened gradually.
Not everything will sort itself out with consumers voting with their wallets. Sometimes degenerate behaviors (like vendor lock-in, planned obsolescence, DRM, spyware, bricking hardware when subscription for it expires) universally benefit companies, so all choices suck in one way or another. There are markets with high barriers to entry, especially in high-end electronics, and have rent-seeking incumbents that work for their shareholders’ interests, not consumers.
Ecodesign worked out wonderfully for vacuum cleaners, but that’s an appliance that hasn’t meaningfully changed since the 1930s. (You could argue that stick vacuum cleaners are different, but ecodesign certainly didn’t prevent them from entering the market)
The smartphone market has obviously been stagnating for a while, so it’ll be interesting to see if ecodesign can shake it up.
I strongly disagree here. They’ve changed massively since the ’90s. Walking around a vacuum cleaner shop in the ’90s, you had two choices of core designs. The vast majority had a bag that doubled as an air filter, pulling air through the bag and catching dust on the way. This is more or less the ’30s design (though those often had separate filters - there were quite a lot of refinements in the ’50s and ’60s - in the ’30s they were still selling ones that required a central compressor in the basement with pneumatic tubes that you plugged the vacuum cleaner into in each room).
Now, if you buy a vacuum cleaner, most of them use centrifugal airflow to precipitate heavy dust and hair, along with filters to catch the finer dust. Aside from the fact that both move air using electric motors, this is a totally different design to the ’30s models and to most of the early to mid ’90s models.
More recently, cheap and high-density lithium ion batteries have made cordless vacuums actually useful. These have been around since the ‘90s but they were pointless handheld things that barely functioned as a dustpan and brush replacement. Now they’re able to replace mains-powered ones for a lot of uses.
Oh, and that’s not even counting the various robot ones that can bounce around the floor unaided. These, ironically, are the ones whose vacuum-cleaner parts look the most like the ’30s design.
Just to add to that, the efficiency of most electrical home appliances has improved massively since the early ‘90s. With a few exceptions, like things based on resistive heating, which can’t improve much because of physics (but even some of those got replaced by devices with alternative heating methods) contemporary devices are a lot better in terms of energy efficiency. A lot of effort went into that, not only on the electrical end, but also on the mechanical end – vacuum cleaners today may look a lot like the ones in the 1930s but inside, from materials to filters, they’re very different. If you handed a contemporary vacuum cleaner to a service technician from the 1940s they wouldn’t know what to do with it.
Ironically enough, direct consumer demand has been a relatively modest driver of ecodesign, too – most consumers can’t and shouldn’t be expected to read power consumption graphs, the impact of one better device is spread across at least two months’ worth of energy bills, and the impact of better electrical filtering trickles down onto consumers only indirectly, so they’re not immediately aware of it. But they do know to look for energy classes or green markings or whatever.
The eco labelling for white goods was one of the inspirations for this law because it’s worked amazingly well. When it was first introduced, most devices were in the B-C classification or worse. It turned out that these were a very good nudge for consumers and people were willing to pay noticeably more for higher-rated devices, to the point that it became impossible to sell anything with less than an A rating. They were forced to recalibrate the scheme a year or two ago because most things were A+ or A++ rated.
It turns out that markets work very well if customers have choice and sufficient information to make an informed choice. Once the labelling was in place, consumers were able to make an informed choice and there was an incentive for vendors to provide better quality on an axis that was now visible to consumers and so provided choice. The market did the rest.
Labeling works well when there’s a somewhat simple thing to measure to get the rating of each device - for a fridge it’s power consumption. It gets trickier when there’s no easy way to determine which of two devices is “better” - what would we measure to put a rating on a mobile phone or a computer?
I suppose the main problem is that such devices are multi-purpose - do I value battery life over FLOPS, screen brightness over resolution, etc.? Perhaps there could be a multi-dimensional rating system (A for battery life, D for gaming performance, B for office work, …), but that gets impractical very quickly.
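Purely as an illustration of what such a multi-axis label could look like as data (the axes and letter grades below are invented for the sake of the sketch, not taken from any real labelling scheme):

```python
# Hypothetical multi-axis rating for a multi-purpose device; the axes and
# grades are made up for illustration, not from any existing scheme.
from dataclasses import dataclass

@dataclass
class DeviceLabel:
    battery_life: str        # e.g. "A"
    gaming_performance: str  # e.g. "D"
    office_work: str         # e.g. "B"
    repairability: str       # e.g. "C"

phone = DeviceLabel(battery_life="A", gaming_performance="D",
                    office_work="B", repairability="C")
print(phone)
```

Even this toy version shows the problem: four axes already need some weighting before they tell a shopper anything at a glance.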
There’s some research by Zinaida Benenson (I don’t have the publication to hand, I saw the pre-publication results) on an earlier proposal for this law that looked at adding two labels describing the device’s security-support commitments.
The proposal was that there would be statutory fines for devices that did not comply with the SLA outlined in those two labels, but companies were free to promise as much or as little as they wanted. Her research looked at this across a few classes of consumer goods and used the standard methodology where users are shown a small number of devices with different specs and different things on these labels and then asked to pick their preference, with price, features, and security SLA varied across trials. I can’t remember the exact numbers, but she found that users consistently were willing to select higher-priced things with better security guarantees, and favoured them over some other features.
All the information I’ve read points to centrifugal filters not being meaningfully more efficient or effective than filter bags, which is why these centrifugal cyclones are often backed up by traditional filters. Despite what James Dyson would have us believe, building vacuum cleaners is not like designing a Tokamak. I’d use them as an example of a meaningless change introduced to give consumers an incentive to upgrade devices that otherwise last decades.
Stick (cordless) vacuums are meaningfully different in that the key cleaning mechanism is no longer suction force. The rotating brush provides most of the cleaning action, coupled with the (relatively) weak suction provided by the cordless motors. This makes them vastly more energy-efficient, although this is probably cancelled out by the higher impact of production and the wear and tear on the components.
It also might be a great opportunity for innovation in modular design. Say, Apple is always very proud when they come up with a new design. Remember the 15-minute mini-doc on their processes when they introduced unibody MacBooks? Or the 10-minute video bragging about their laminated screens?
I don’t see why it can’t be about how they designed a clever back cover that can be opened without tools to replace the battery while staying waterproof. Or how they came up with a new super fancy screen glass that can survive 45 drops.
Depending on how you define “progress” there can be plenty of opportunities to innovate. Moreover, with better repairability there are more opportunities for modding. Isn’t it “progress” if you can replace one of the cameras on your iPhone Pro with, say, an infrared camera? Definitely not a mainstream feature that would ever come to the mass-produced iPhone, but maybe a useful feature for some professionals. With available schematics this might have a chance to actually come to market. There’s no chance of it ever coming to a glued solid rectangle that rejects any part but the very specific one it came with from the factory.
Phones have not made meaningful progress since the first few years of the iPhone. It’s about time.
That’s one way to think about it. Another is that shaping markets is one of the primary jobs of the government, and a representative government – which, for all its faults, the EU is – delegates this job to politics. And folks make a political decision on the balance of equities differently, and … well, they decide how the markets should look. I don’t think that “innovation” or “efficiency” at providing what the market currently provides is anything like a dispositive argument.
thank god
There’s a chance that tech companies start to make EU-only hardware.
This overall shift will favor long-term R&D investments of the kind placed before our last two decades of boom. It will improve innovation in the same way that making your kid eat vegetables improves their health. This is necessary soil for future booms.
What the hell does “manufacturers will have to make compatible software updates available for at least 5 years” mean? Who determines what bugs now legally need to be fixed on what schedule? What are the conditions under which this rule is considered to have been breached? What if the OS has added features that are technically possible on older devices but would chew up batteries too quickly because of missing accelerators found on newer devices? This is madness.
Took me all of 10 minutes to find the actual law instead of a summary. All of the questions you have asked have pretty satisfying answers there IMO.
From the “Operating system updates” section:
If you fix a bug or security issue, you have to backport it to all models you’ve offered for sale in the past 5 years. If your upstream (android) releases a security fix, you must release it within 4 months (part c of that section).
Part F says if your updates slow down old models, you have to restore them to good performance within “a reasonable time” (yuck, give us a number). An opt-in feature toggle that enables new features but slows down the device is permitted, which I suspect is how your last question would be handled in practice.
That’s going to cause a lot of fly-by-night vendors to abandon the market I imagine :)
I wonder how they determine what is “the same operating system”
Good. The world is much better off without them.
The real scary thing is that it took users weeks to notice that it shipped, even though it wasn’t obfuscated in any way. This shows how risky the ecosystem is, without enough eyes reviewing published crates. If any high-profile crate author gets infected with malware that injects itself into crates, it’s going to be an apocalypse for Rust.
Maybe this is also a sign that the complaints themselves were incoherent?
I think it’s only a sign that we’re unaware until this hits a sandboxed / reproducible build system. I guess that’s currently distribution packaging, or projects that otherwise use Nix or Bazel to build.
But it highlights how little else is sandboxed.
Exactly, these complaints are incoherent unless you were already doing the things that would cause you to notice the change!
I’m not sure that “No one’s looking anyway, so it’s totally fine” is the right takeaway from this.
If the complaint is that binaries are more difficult to audit than source, and no one is auditing, then it should make no difference either way from a security perspective.
It is perfectly coherent to advocate for other people.
I think “weeks” is a bit of an exaggeration. People were openly discussing it at least a week after release. It’s true though that it didn’t blow up on social media until weeks later and many people didn’t realise until then.
If it had been a security issue, or it had been done by someone much less reputable than the author of serde, or if the author had not responded, then I suspect rustsec may have been more motivated to post an advisory.
Something that I might have expected to see included in this comment, and that I instead will provide myself, is a plug for bothering to review the code in one’s (prospective) dependencies, or to import reviews from trusted other people (or, put differently, to limit oneself to dependencies that one is able and willing to review or that someone one trusts has reviewed).
I recall that kornel at least used to encourage the use of cargo-crev, and their Lib.rs now also shows reviews from the newer and more streamlined cargo-vet.
I note that the change adding the blob to Serde was reviewed and approved through cargo-vet by someone at Mozilla. I don’t think that necessarily means these reviewing measures would not be useful in a situation that isn’t as much a drill (i.e., with a blob more likely to be malicious).
Yeah - my recollection of crev is that libraries like serde often got reviews like “it’s serde, might as well be the stdlib, I trust this without reviewing it as the chances of it being malicious are basically zero”
—Andrew S. Tanenbaum (author of Minix)
Ah, the days when the entire campus shared a T-1 connection and it was just easier for UCB to mail you a 9-track of BSD 4.2 than to try and download it.
Apparently he wrote this in 1981 [1]. It’s amazing how over 40 years later, the premise still holds true. There seems to be a fundamental principle at play that could not be broken with technological innovations. Makes me wonder if there is a physical law that limits the speed of transfer of information depending on the mass and energy that is used to transmit it.
[1] https://what-if.xkcd.com/31/
Reminds me of the AWS Snowmobile product for the transport of exabyte-scale datasets into and out of the AWS datacentre. Essentially a SAN on wheels.
We’re moving a ton of data around these days, for $REASONS. I have repeatedly pointed to this example when people respond to the station wagon analogy with “that’s just something you oldsters say…that doesn’t matter now”. Suddenly, a dim light goes on (an LED, not a light bulb).
As that xkcd says, “all of that data is coming from somewhere, and going somewhere.”
To really hammer home how quickly the price of storage is still shrinking, the article suggests about 900-1000 USD for a 1TB SSD, but you can buy a 1TB Western Digital Black NVMe on Amazon.com right now for $50.
Something beyond Shannon-Hartley theorem?
Perhaps? What’s the signal power of matter you take along? You just take e = mc²? I haven’t thought this through in detail, I was just wondering as I wrote the comment.
Thinking out loud here:
With digital data transfer you have to essentially take 1 electron at a time to the new destination.
With physical data transfer you can take trillions of electrons at a time to the new destination.
Your comment about e = mc² really does seem to touch this. We can only fling 1 electron near / at the speed of light at a time. So there’s a hard limit on how much data / second you can do. We can carry tons of electrons though at a time.
So we’re optimizing for “sending the least information that makes the most sense to us in the least time”, more or less. We don’t need to be sending each other 100GB videos, and we don’t. It’s much faster to send a 1GB video that is as recognizable as a 10GB video over the internet.
It comes down to “we want to see something as fast as possible”, i.e. what most people say about this topic: optimizing for latency.
I think it’s better to see things as “digital objects”, like for example, if you wanted the Ethereum blockchain. It’d be faster to get it on a flash drive than downloading it for most people, but it can’t be broken down further than its “useful unit”, i.e. itself.
100% awesome thought :)
Someone could probably graph when these thresholds cross? :o
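If anyone does want to graph it, here’s a minimal back-of-the-envelope sketch; the 100 TB payload, 24-hour courier trip, and 1 Gbit/s link are made-up assumptions to replace with your own numbers:

```python
# Back-of-the-envelope: effective bandwidth of shipping storage vs. a network link.
# The payload size, transit time, and link speed below are made-up assumptions.

def shipping_bandwidth_gbps(capacity_tb: float, transit_hours: float) -> float:
    """Effective bandwidth (Gbit/s) of physically carrying the storage."""
    bits = capacity_tb * 1e12 * 8          # terabytes -> bits
    return bits / (transit_hours * 3600) / 1e9

def network_transfer_hours(capacity_tb: float, link_gbps: float) -> float:
    """Hours needed to push the same data over a network link."""
    bits = capacity_tb * 1e12 * 8
    return bits / (link_gbps * 1e9) / 3600

capacity_tb = 100     # say, a box of a hundred 1 TB SSDs
transit_hours = 24    # overnight courier
link_gbps = 1         # a decent fixed-line connection

print(f"Courier: {shipping_bandwidth_gbps(capacity_tb, transit_hours):.1f} Gbit/s effective")
print(f"Network: {network_transfer_hours(capacity_tb, link_gbps):.0f} hours for the same data")
```

With those made-up numbers the courier works out to roughly 9 Gbit/s effective while the link needs over nine days, which is the usual station-wagon conclusion: the crossover moves as links get faster, but storage density has so far kept pace.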
Most bulk data transfer these days is over fibre. With modern fibre, you can send multiple wavelengths simultaneously. The upper limit on this is one symbol per photon, though I’m not sure that there is a theoretical limit on the number of distinct symbols. Practical limits depend on the discrimination of the receiver and the flexibility of the bends (different wavelengths travel at different speeds because they refract differently, so unless you have a perfectly straight fibre and send the photons down perfectly straight, there are limits on how closely you can space them).
Similarly, flash doesn’t store one bit per electron. Each cell holds a charge and that encodes one symbol. With MLC, there are typically 4-8 possible values per cell. In theory, you could build storage where you use different energy levels in electrons on a single atom to represent this. Quantum mechanics tells you both how many possible symbol values you can store and why it’s really hard to build something useful like this.
For both forms of transfer, the practical limitations of the specific technology have much more impact than the theoretical limits.
It’s also worth noting that most of these comparisons ignore the bus speed on the removable storage. My home Internet can download (but not upload, because my ISP is stupid) data faster than I can read from cheap flash drives. Commercial fibre is faster than most local storage unless you are able to read and write a lot in parallel.
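For reference, the two standard information-theory results lurking behind this (generic formulas, not tied to any particular fibre or flash part) are the bits carried per symbol and the Shannon–Hartley capacity, which is what ultimately bounds how many symbol values a receiver can distinguish over a noisy channel:

```latex
% Bits per symbol when each symbol can take one of M distinguishable values:
b = \log_2 M
% Shannon–Hartley capacity for bandwidth B and signal-to-noise ratio S/N:
C = B \log_2\!\left(1 + \tfrac{S}{N}\right)
```

So MLC flash at 8 levels per cell stores 3 bits per cell; as noted above, the practical limits tend to bite long before the theoretical ones.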
Wow, didn’t know fibre could do that :o sick.
Yeah, the electron bit was just to simplify thinking around things, but you’re totally right. And good point about bus speed. I think the counter argument there is “you have the data at your disposal now”, so you don’t need to read it all off onto your primary storage device?
Last night before sleeping I basically concluded that if you can just add more wires (and I guess as you explained, wavelengths) then you can always beat the “practical carry limit” of physical data transfer… Except it seems you can always store more before you can transfer faster, at least that’s the trend right?
Innovation in tape storage density is a big factor.
I’d argue the principle at work is the latency/bandwidth relation.
You can absolutely start pushing enough bandwidth to beat the pigeon over most ordinary wire. But the protocol involved will mean your latency will suffer and normal usage will be near impossible.
For example, you could write a protocol that spends a minute probing out the exact physical properties of the transfer wires between source and destination so it can model the wire and figure out how to blast through a 100 gigabit packet of data once. The receiver will be spending that minute much the same way, trying to probe out the wire. Once the blast is complete you’d have to remeasure, as it’s now another minute later and the wire might have warmed up, changing its properties. Plus the receiver now needs to process this 100-gigabit blast and ACK the reception (even just hashing it to make sure it got there will take a moment with 100 gigabits of data). Retransmissions will cost more time. But ultimately this incredibly horrific latency buys you a lot of bandwidth on average, not a lot in the median.
On the flip side, you can skip all of that and get a much lower average bandwidth but much more usable latency guarantees, plus a better median bandwidth.
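Schematically, with T_setup standing in for all of the probing, hashing, and ACK overhead in the example above, the average throughput for a transfer of S bits over a link of raw rate R is:

```latex
\text{throughput}_{\text{avg}} = \frac{S}{T_{\text{setup}} + S/R}
```

As S grows this approaches R, but the first usable byte still arrives no sooner than T_setup, which is exactly the latency-for-bandwidth trade being described.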
That ad blocker plea seems like a great case study of how fundamentally ad supported content is at odds with ethical disclosure. The plea is fairly convincing but when I went to the uBlock panel to allow Ethical Ads I saw it is using DoubleClick.
Ethical ads are only ethical under the (misguided, imo) framing that ads are bad because of the surveillance & privacy invasion, rather than the framing that ads are bad because they are researched attempts at modifying your behavior to suit corporate ends.
If you want to make money off of me, find a less scummy way than subtly massaging my personality and exposing me to visual/linguistic malware.
Way off topic, but I think you can split ads into two categories: ads that inform you about a product that might be relevant to you, and ads that try to manipulate you into wanting something.
The former class is essential to business and, I would argue, is ethical. A lot of early Google ads were like this, they told you about a thing that’s relevant to the thing you’re reading about. A lot of magazine articles in technical magazines are like this too: they include technical specifications and other information that you need to make an informed decision. I used to buy a magazine called Computer Shopper that was well over 50% ads, and I bought it in a large part because of the ads: they gave me a great overview of what was available and at what prices from a load of different manufacturers. A lot of trade magazines are like this.
I would love to see something that properly incentivises this kind of advert. I strongly suspect that, for companies that have a decent product, it actually works better because it gets their product to the people who others are most likely to ask for recommendations.
Offtopic: I’d argue that any ad is in the second category: manipulation. The sole purpose of an ad is to hijack your attention to imprint some information on you. Making me aware, without my asking, that a certain TV exists that costs X and has feature Y is manipulating me into considering buying it.
A business that only wants to publish information about the TV could just create a product page or website.
Ontopic: this was my favorite talk of the conference! Both technical and entertaining.
For what it’s worth, I do actually have code that forbids the ads for people visiting from Lobsters, but Lobsters links are noreferrer.
Precisely! I don’t mind publishers getting a bunch of semi-anonymous details about me. I mind every search and every website I make in the future becoming less helpful, less interesting, and less varied by everyone involved thinking they know which of my interests I might be willing to spend money on at any particular time.
I have since hosted the video on my CDN, meaning that DoubleClick is no longer integrated into that page. That was the YouTube iframe in action.
I think the DoubleClick ads are the YouTube iframe. I can’t really do anything about that unless I repost the video on my CDN, which I don’t have the energy to do today. Maybe tomorrow.
Then obviously you shouldn’t tell people to turn off their ad blockers, as most ad blockers will allow those DoubleClick scripts when disabled for your site.
I have just replaced the YouTube embed with a copy of the video hosted on my CDN.
One reason I’m excited about the digital-€ project is that they are aiming for very low, even free, transaction costs, which would be key for micropayments. So I could actually pay something for articles I’d like to read. And hopefully people wouldn’t pay for the content that is now pushed just for getting you to view ads. There’s also https://webmonetization.org/
Micropayments have been hyped since forever, and haven’t really taken off. I honestly believe it’s a social issue, not a technological one.
I think there are some big psychological problems with micropayments. People aren’t good at estimating tiny numbers multiplied by other numbers that they’re also bad at estimating. How many web pages do you visit per month? How much will you pay if you are paying each one 0.1¢ per view? What about 1¢ or 0.01¢? I honestly couldn’t tell you how much I’d be paying per month for these.
A big part of the success of things like Spotify and Netflix is that they are able to offer a flat rate that lets you avoid thinking about this. If you could pay $10/month to support content creators and have a privacy-preserving system that would reward the ones that you valued, I think a lot of people would sign up. I actually designed such a system and pitched it to the chief economist at MS a few years back. Sadly, the feedback was that, if we tried to deploy such a system, Google would be in a position to scale it up faster (and it’s the kind of thing where second place is largely worthless), so it didn’t make commercial sense to do so.
I don’t know if the problem is actually the nature of micropayments. Tens of millions of people already use the micropayment systems in video games where the currencies cost real money - because they develop loose heuristics for how much virtual items are worth. Sure, they usually buy far fewer items than one might imagine for an internet-scale content micropayment system - but I don’t see any reason that intuition, which is proven to work currently, wouldn’t scale to larger volumes, especially because many tens of thousands play games such as EVE Online where transaction volumes do approach (or exceed) those of the real world.
At the very least, having a subscription-style system like the one that you describe would provide the infrastructure to test such a microtransaction system.
Do you have a citation for that? Last time I looked at the economics of these systems, the overwhelming majority of spending came from a tiny fraction of players, which the industry refers to as ‘whales’. The games are specifically designed to trigger an addiction response in vulnerable people and make Google Ads look ethical in comparison.
This study[1] had 688 of 1000 (69%) respondents self-report that they spent money on Fortnite. In 2020, Fortnite had ~80 million monthly active users (discarding the 320 million users that aren’t currently active). We’re not going to get data from the company itself, but it’s highly plausible that tens of millions of people engage in Fortnite’s microtransaction system alone, ignoring smaller systems like EVE Online and Diablo 4 and Genshin Impact (and all the other gacha games).
While we don’t have the statistical distribution of spending patterns, the fact that millions (minimum) of people use these systems means that even if the industry of free-to-play games is mostly exploitative (which I agree it is), millions of people have at least some experience with these systems, and it’s highly probable that there’s a significant intersection between active users and paid users (that is, that many users are familiar with the system, as opposed to only having touched it once and never again).
From first principles - humans don’t have a natural intuition for “real” money and “normal” purchase patterns, either - it has to be learned. I don’t see any plausible reason to believe that it’s substantially more difficult for humans to learn to use a microtransaction system than the “normal” ones that we have now.
As to your earlier point:
…people will develop that ability to estimate and/or budget if they actually have to use such a system. You could make a similar claim about our current financial system - “People are bad at estimating how much it costs to buy a $6 Starbucks every workday for a month” - and while initially that’s true, over time some people do learn to get good at estimating that, while other people will actually sit down and work out how much that costs - it’s the same mechanic for microtransactions.
I opened Firefox’s history, selected the month of July, and saw that I visited 4000 pages. At 0.1c per view, that’s $4 - not a problem. At 1c per view, that’s $40 - a reasonable entertainment budget that is comparable in size to going out for dinner with my wife once.
Yes, it took me one full minute to do that math, and that’s not instantaneous, but not everything in our lives has to be instantaneous or built-in intuition. (plus, now that I’ve computed that result, I can cache it mentally for the foreseeable future) I really think that you’re significantly overestimating the amount of work it takes to do these kinds of order-of-magnitude calculations, and under-estimating how adaptable humans are.
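For what it’s worth, that estimate in one tiny sketch (the 4000 pages/month and the per-view prices are just the illustrative figures from this thread):

```python
# Rough monthly-cost estimate for per-page micropayments.
# The page count and prices are the illustrative figures from the discussion above.
pages_per_month = 4000

for cents_per_view in (0.01, 0.1, 1.0):
    dollars = pages_per_month * cents_per_view / 100
    print(f"{cents_per_view}¢ per view -> ${dollars:.2f} per month")
```

Which gives $0.40, $4, and $40 per month respectively for the three prices mentioned earlier.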
As for the necessity of said adaptation - I’d much rather have a microtransaction system than either a subscription one (not ideal for creators) or an ad-based one (with all of the problems that it entails) - making the purchase decisions even acts as a built-in rate limiter to gently pressure me to spend less time on entertainment, and less time distractedly skimming pages, and more time carefully and thoughtfully reading longer and more useful articles. I think that such a system would have extremely useful second-order effects for our society at large, precisely because of the need to slow down to assess whether a piece of information or entertainment is worth your money (even though your time is usually much more valuable).
[1] https://lendedu.com/blog/finances-of-fortnite/ [2] https://www.statista.com/statistics/1238914/fortnite-mau/
A true micropayment system would allow me to fund one “wallet” and buy stuff from MSFT, Nintendo, Ubisoft etc. Right now you need a separate account for each of them. That’s more a consequence of their target audience usually not having access to credit cards than a sign of the success of micropayments.
I don’t understand the argument you’re making here.
I’m not claiming that the micropayment systems in video games are “real” or “general internet-scale” micropayment systems - obviously, they’re not - and I’m not claiming that micropayments are “successful”, because these video game systems are very different than the kind being discussed here.
Instead, I’m pointing out that the existence of these systems is proof that humans can use virtual/abstracted currencies to purchase intangible goods, which is basically what you need, cognitively, for a micropayment system to succeed, and I’m also trying to refute David’s claim that “People aren’t good at estimating tiny numbers multiplied by other numbers that they’re also bad at estimating.”
We should separate micropayments the technology from micropayments the interface. If we had the technology, there’s no reason you couldn’t have a client program divide a certain amount of donations per month between sites you use based on an algorithm of your choosing. Of course this arrangement is harder to profit off of because there is less room for shenanigans, but that also makes it better for everyone else.
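As a sketch of what that client-side split could look like (the proportional-to-visits rule, the $10 budget, and the site names are all just assumptions for illustration):

```python
# Hypothetical client-side donation splitter: divide a fixed monthly budget
# between sites in proportion to how often you visited them.
def split_budget(budget: float, visits: dict[str, int]) -> dict[str, float]:
    total = sum(visits.values())
    if total == 0:
        return {site: 0.0 for site in visits}
    return {site: budget * count / total for site, count in visits.items()}

# Made-up visit counts; a real client would pull these from browser history.
history = {"example-blog.org": 120, "some-news.example": 45, "a-webcomic.example": 15}
for site, amount in split_budget(10.00, history).items():
    print(f"{site}: ${amount:.2f}")
```

The point is only that the splitting rule lives entirely on the user’s side, which is what leaves less room for shenanigans compared to an ad-funded or platform-controlled payout.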
Here’s a story from 13 years ago about Flattr and how it was trying to tackle micropayments. It had essentially that model - you prepaid and it divided the donations.
https://www.techdirt.com/2010/08/05/getting-past-the-hurdles-of-micropayments/
Flattr is apparently still around but hardly mainstream. I see way more links to Patreon, Ko-fi, and substack nowadays.
Very useful for comparison, thanks for sharing.