Standard data protection: Messages in iCloud is end-to-end encrypted when iCloud Backup is disabled. When iCloud Backup is enabled, your backup includes a copy of the Messages in iCloud encryption key to help you recover your data. If you turn off iCloud Backup, a new key is generated on your device to protect future Messages in iCloud. This key is end-to-end encrypted between your devices and isn't stored by Apple.
Advanced Data Protection: Messages in iCloud is always end-to-end encrypted. When iCloud Backup is enabled, everything inside it is end-to-end encrypted, including the Messages in iCloud encryption key.
It’s eternally disappointing that Apple don’t encourage enrolling in ADP during initial setup, considering they do encourage enrolling in both iCloud Backup and Messages in iCloud.
If you have a lot of these kinds of convos, I would highly recommend setting up an explicit “on call” rotation so designated team members are in charge of handling external requests. Otherwise you can have motivated people get a bit ambitious with handling everything, become overloaded with firefighting, and end up just kinda exhausted. All while not sharing the institutional knowledge enough, so they become a SPOF in an odd way.
Always good to make sure team members are taking steps forwards, of course. I just think that when people are doing this on rotation then it removes a bit of variability. Not a hard and fast rule of course.
In addition to an explicit ‘on call,’ there were two other practices at my former employer that I think advance this philosophy.
One, we had a policy that a customer could not be redirected more than three times. If you were last in the line, you had to hold the ticket to completion rather than redirect to another team. The one time I was the one holding the ticket, the client was seeing random errors throughout the product. As it turned out, they had an HTTP proxy on their side that was randomly failing requests (but only on certain domains), but the policy forced someone to fully investigate rather than keep passing the buck once symptoms could be ascribed to a different team.
Secondly, as the company grew, we added an ‘engineer support’ role that could support the on-calls. They could handle long-term investigations and support jobs that were longer than a week, but not big or long enough to warrant an actual project.
Totally agree with your advice for an explicit “on call” during business-hours.
Crucially, moving support out of DMs and into public channels means others can search logs for advice on similar issues (and sometimes even answer their own questions!)
I wrote an internal bot a few years ago that syncs Slack user groups with $work’s on call scheduler. Folks can say @myteam-oncall in a public channel and instantly reach the right person without overambitious members needing to be involved in triage. It’s also easy enough to say @friendlyteam-oncall and redirect folk in-place to another team without switching channels or losing context.
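For the curious, the sync step itself is simple. Here’s a hedged Python sketch (names and payload shape are mine, not Slack’s; the real bot would push `members` through the Slack Web API’s `usergroups.users.update` method via something like slack_sdk):

```python
# Hypothetical sketch of the sync step: given the member IDs currently in the
# Slack user group and the IDs currently on call (per the scheduler), compute
# the update. Names and shapes are illustrative, not Slack's actual API.
def sync_payload(group_members, oncall):
    """Return the desired final membership plus a changelog for audit logs."""
    return {
        "members": sorted(oncall),                 # Slack replaces the whole list
        "added": sorted(oncall - group_members),   # joined the rotation
        "removed": sorted(group_members - oncall), # rotated off
    }

payload = sync_payload({"U01", "U02"}, {"U02", "U03"})
# payload["members"] is what you'd send to usergroups.users.update.
```

Running this on a schedule (or on the scheduler’s webhook) keeps the `@myteam-oncall` group pointing at the right humans without anyone touching Slack admin by hand.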
For me, this document raises the “Pixiv problem”. That is to say, even assuming every PDS is okay with hosting any content they are legally permitted to, some content is legal in some jurisdictions and not others.
More specifically, from reading this and the ATProto docs, my understanding is that PDSes host (that is, both store and serve) a user’s repository (that is, their posts). Based on this section:
If your PDS goes down, and you want to migrate to a new one, there’s a way to backfill the contents of the PDS from the network itself, and inform the network that your PDS has moved. It is real, meaningful account portability, and that is radically different from any similar service running today.
It seems to me that, in accepting a new user, a PDS accepts responsibility for hosting everything that user has ever posted, both in legal terms and in terms of storage space and bandwidth. What if that user and their previous PDS were in a jurisdiction where, say, lolicon is legal, but the new PDS is not? This is the basic reason that most ActivityPub implementations don’t port posts upon receiving the Move activity. How does BlueSky handle this case?
Edit: I also think that this line about the did:plc scheme is a bit disingenuous:
I personally see this as an example of pragmatically shipping something, others see it as a nefarious plot. You’ll have to decide for yourself.
I think the opinion of most folks who are bsky-shy is neither of these; or, rather, it’s an example of pragmatically shipping something and then acting like its replacement is already here. If we are going to evaluate BlueSky on how DIDs might work in the future, we also have to agree to evaluate other distributed solutions on their promises rather than their realities.
This is the basic reason that most ActivityPub implementations don’t port posts upon receiving the Move activity.
I think this is a stretch. Object identifiers in ActivityPub are either null or “publicly dereferencable URIs, such as HTTPS URIs, with their authority belonging to that of their originating server” (0).
While some implementations (like honk) readily permit importing posts from data exports, these can’t fully assume the old posts’ identities and are unable to migrate likes/boosts/replies from other servers. Notably, ActivityPub’s assumption of invariant object identifiers prevented the recently shut-down queer.af instance from simply adopting a new domain outwith the .af TLD (1).
What if that user and their previous PDS were in a jurisdiction where, say, lolicon is legal, but the new PDS is not? […] How does BlueSky handle this case?
Bluesky encodes attachments to posts as a blob Type (2): these aren’t directly stored in user repositories, instead just a reference to the blob’s CID is used (3).
Where illegal content is uploaded to a PDS as a blob, the PDS can refuse to serve the blob without otherwise manipulating the user’s repository (4). Blobs can be taken down by PDS admins through the com.atproto.admin.updateSubjectStatus XRPC method, eventually landing in the ModerationService’s takedownBlob.
tl;dr: new PDS is empowered to enforce local laws by refusing to serve problematic blobs (or, if needed, the entire repository).
For better (~queer.af type situations) or worse (~lolicon), moving to a third PDS would allow a user to restore blobs that have been taken down by the second PDS by reuploading them bit-for-bit (from a backup, archive, …) while preserving the CID.
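To make the bit-for-bit point concrete, here’s a toy Python sketch of content addressing. Real atproto blob CIDs are CIDv1 with multibase/multicodec framing rather than bare SHA-256 hex, but the principle is the same:

```python
import hashlib

def toy_cid(data: bytes) -> str:
    # The identifier is derived from the bytes themselves, not from where
    # they're stored - so an identical reupload reproduces the identifier.
    return hashlib.sha256(data).hexdigest()

original = toy_cid(b"some image bytes")
reupload = toy_cid(b"some image bytes")   # bit-for-bit identical upload
edited   = toy_cid(b"some image bytes!")  # any change yields a new identifier

print(original == reupload, original == edited)  # True False
```

This is why a takedown at one PDS doesn’t invalidate the reference in the repository: the repo only ever pointed at the hash, and any host serving matching bytes satisfies it.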
I think this response conflates identity with content, which is common in these discussions, but I want to be sure we separate these things. First of all, it is absolutely true that ActivityPub relies on DNS as its identity system. I personally think we can take better advantage of that [1], but that’s a core aspect of the network. If your instance loses ownership over its domain, your identity is lost, unless you can issue a Move activity before that happens.
In the case of queer.af, many users did Move to other instances, preserving most or all of their social graph automatically. In theory - and that’s the realm we’re operating in, because nobody has yet migrated the entire userbase of a PDS under adverse conditions - Erin could have spun up a new instance at queeraf.othertld, generated profiles and accounts for all the users from queer.af who hadn’t yet Moved, and issued Move activities for their accounts, moving everyone there and preserving their identities. In this way, ActivityPub and ATProto provide a similar level of protection for identity and social graphs.
Where they differ is in trust. In ATProto, your DID is tied to either the DNS system or BlueSky-the-company (and, in future, maybe to key material you create and own). In ActivityPub, your identity is tied to the instance you’re using. ActivityPub defends you better against a single company being able to destroy your identity; ATProto defends you better against a compromised or destroyed PDS.
So much for identity; on to content. You assert:
While some implementations (like honk) readily permit importing posts from data exports, these can’t fully assume the old posts’ identities and are unable to migrate likes/boosts/replies from other servers.
This is true, but I don’t think it’s really… that important? There are two cases where this matters: metrics and links.
Let’s discuss links first. In both ATProto and ActivityPub-as-implemented, links are mostly tied to a particular service, and can’t move around if that service goes down. In ATProto, that’s the Application,[3] whereas in ActivityPub, the data’s owner (like the PDS) and the canonical web view into it are bundled.[2] In both cases, if the service fronting the links goes down, the links become ~impossible to access as time tends towards infinity; neither design really solves this problem.
For metrics, it’s certainly true that ATProto solves this problem where ActivityPub doesn’t. I think this is an example of a tradeoff that ActivityPub makes fairly intentionally; it’s not possible to reconstruct the like count of a post from observing the network, because that would make likes public, which is not acceptable in AP’s privacy model. I prefer AP’s solution; other people prefer ATProto’s solution; that’s fine.
But there’s another point, and it’s the main thing I don’t think the ATProto folks have really thought through. There’s actually no technical reason you couldn’t reconstitute posts as you move an account from one ActivityPub server to the next; as you say, honk permits this. My assertion in the GP was that the reason Mastodon, Akkoma, and GoToSocial don’t do this is because it makes moderation hard. Your response was:
Where illegal content is uploaded to a PDS as a blob, the PDS can refuse to serve the blob without otherwise manipulating the user’s repository (4). Blobs can be taken down by PDS admins through the com.atproto.admin.updateSubjectStatus XRPC method, eventually landing in the ModerationService’s takedownBlob.
This is, of course, true of ActivityPub as well. Taking down posts, or just media from posts, is not difficult on existing ActivityPub servers. One could imagine building automated workflows to do this during account content moves, even. But that is difficult; either it’s a time-consuming and soul-draining manual process of checking possibly thousands of posts for content that is illegal or against the {instance, PDS’s} rules, or an extremely error-prone automated one with user frustration and possible legal consequences if your automation fails.
At a protocol functionality level, the only difference between ActivityPub and ATProto here is that ATProto preserves the ability to reconstitute post metrics across PDS moves. That is, frankly, a pretty small win in my opinion, given the resulting privacy issues, and given that it puts the vast majority of the userbase’s “distributed” identities (as well as the permission to stand up your own PDS!) in the hands of a venture capital-backed startup.
1: https://github.com/mastodon/mastodon/issues/24760
2: That said, even in ActivityPub, it’s possible to access a post on the web through a third party; in the queer.af case, many queer.af posts remain cached on other instances, and looking them up by their previous unique ID (their URL) works fine, just as it would in ATProto.
3: I anticipate a protestation that the Application link would still work if the PDS goes down or moves. That’s true, but it only matters if we imagine that Application servers are more stable, over all, than PDSes. Why would that be the case?
the reason Mastodon, Akkoma, and GoToSocial don’t do this
Another reason is that they’re generally hard-coded around a “post arrives, send post out” workflow - backfilling would require logic around “if this post is backdated[1], do not send it out”. It’s not impossible[2] but it does require a whole bunch of “who can do this?”, “what are the constraints?” etc. considerations.
[1] Which the MastoAPI doesn’t provide support for anyway.
[2] I locally-patched my Akkoma to do this when backfilling 10 years of Twitter bot content.
That’s true. As you say, though, this is not an inherent limitation, just something most people don’t really care to change, since they don’t want to import posts anyway.
I understand this perspective, but I think it only really works if you see the current state as an end point, and not a start. I haven’t heard the devs literally ever invoke Pixiv, but the Japanese contingent on BlueSky is large. They’ve demonstrated that they understand issues like revenge porn and CSAM, and aren’t interested in trying to make such content available forever. This stuff isn’t simple in a federated system.
I also think that this line about the did:plc scheme is a bit disingenuous:
I don’t intend to be disingenuous. I hear you that like, a lot of folks who are bsky positive tend to think about it aspirationally, that is, what it could be, and not what it is today. I think that early on in life, that stance makes sense, but also appreciate that “wait and see” isn’t good enough for everyone. That’s fine.
To me, when I think about systems, I want to know what futures they make possible, and which they make impossible. This specific decision doesn’t preclude other possibilities if you dislike how did:plc works, and that’s important, to me at least. And I have seen the team act in repeatedly good ways, and so I tend to view them charitably, whereas some people are just inherently suspicious. It’s up to them to prove that our good faith is well founded. All I meant was that it’s fine with me if you don’t want to hear it and prefer to evaluate it more harshly.
I think you know I respect you a lot. The way you describe ATProto and the things you see in its future are really exciting, and I want to share your optimistic vision. But to me, it feels like a lot of people - yourself included - are taking an optimistic view of ATProto despite having had a historically pessimistic view of, especially, ActivityPub.
ActivityPub has problems, but it also does a lot of really amazing things without any money and while being dramatically less centralized than BlueSky is in practice. I wish we could see what ActivityPub would look like if cutting-edge developers like Tobi were supported by an 8 million dollar seed round, rather than a bunch of nerds giving them $60 a month. I wish we could see what Mastodon itself could look like if it had 8 million dollars to start out with rather than, at a stretch, two million Euros ever.
In particular, BlueSky is marketing itself in an inherently disingenuous way. They published a paper about ATProto being “Usable Decentralized Social Media” before it was meaningfully decentralized; it still isn’t, really, since it’s not possible to run your own PDS without Bluesky-the-company’s blessing, or your own Relay. That makes me pretty skeptical that the BlueSky team is going to do a lot of work to make it truly decentralized, and I freely admit, a little bitter. We’ve been doing decentralized social media in the {StatusNet, OStatus, ActivityPub, …} network for, depending on how you figure it, a decade; why does this half-baked Twitter clone get the spotlight?
There are many problems with ActivityPub; but for all its faults, ActivityPub was born decentralized. It has dozens of server implementations, of which perhaps a dozen are usable in production, and hundreds of clients. I myself use a non-Fediverse ActivityPub based network, as well as having an account on the wider Fediverse, using the same software to the same ends but without providing Mastodon gGmbH anything, even content. That’s simply not possible with ATProto and its related software right now. I also don’t see anything in the way of implementation efforts for third-party PDS and relay software (though it’s possible that they just exist outside my bubble.)
The OP perpetuates the same issue elsewhere, too:
There’s no “BlueSky server,” there’s just servers running atproto distributing messages to each other, both BlueSky messages and whatever other messages from whatever other applications people create.
As far as I understand, that’s not true, unless we also say there is “no Facebook server” because facebook.com is a distributed system, or that there was no WhatsApp server when they used XMPP internally. There is a BlueSky server - it’s the Relay and the PDSes they own, and they control access completely.
This [feeds] to me is one of the killer features of BlueSky over other microblogging tools: total user choice. If I want to make my own algorithm, I can do so. And I can share them easily with others. If you use BlueSky, you can visit any of those feeds and follow them too.
Total user choice - as long as you’re not choosing to run a PDS Bluesky-the-company doesn’t want to federate with. Even beyond that, it’s hard for me to imagine that in a future with, say, thousands of PDSes and hundreds or dozens of Relays, no Application blocks a Relay, and no Relay blocks a PDS. If the response is that this is solved by combining the feeds of multiple Relays, well, that’s entirely possible in other microblogging tools, and it’s not widely implemented in ATProto yet either, because it doesn’t need to be, because… it’s all run by one company.
If what you really want is total user choice, I don’t think we can stop at federation; I think we have to build truly P2P social media. ActivityPub ain’t that, but neither is Bluesky.
I do, and I appreciate the length. I feel we’re going back and forth here, so I’ll probably drop this thread after this, but I do think there’s one part here that’s illustrative of this dichotomy we’re on the opposite ends of:
it still isn’t, really, since it’s not possible to run your own PDS without Bluesky-the-company’s blessing, or your own Relay.
I can see that. From my perspective:
The team says “here’s our design for federation, it’ll come, but it’s difficult.”
The team refactors the internal codebase to run a database per user, in preparation for real federation
The team runs their own servers, tests out federation.
The team opens up federation, with the above caveats.
We are here. Which leads you to say
That makes me pretty skeptical that the BlueSky team is going to do a lot of work to make it truly decentralized
But to me, what I see, is (these are their direct words): “Here, in this first phase of federation, you can file a request for your self-hosted PDS to be crawled and added to the production federated network. This is an early access phase as federation rolls out. In the next phase, you will not need to file a request through Bluesky.” I also see a team who has consistently promised to move towards openness. One of their mottos is “the company is a future adversary.” They are executing a plan, and continue to execute on that plan.
So, that’s why I’m not with you here: it feels like you’re reading bad faith into a slow and thoughtful rollout process of a core feature of the entire project.
That said, I do think your point about future vs present is apt, and I’m going to be thinking about it for a while. For me though, the present of Mastodon is not fit for purpose, and that’s maybe why I am so forward looking.
(oh, one last thing: “I also don’t see anything in the way of implementation efforts for third-party PDS and relay software (though it’s possible that they just exist outside my bubble.)” I don’t yet either; I think that’s because the API isn’t considered to be set in stone just yet. I personally want to reimplement everything, but don’t know if I have the energy or time, and so I don’t want to begin until things are more settled. I suspect many others are in the same boat.)
Yeah, I don’t intend to make this a further back and forth. I appreciate you reading and responding to my wall of nonsense. I genuinely do hope you’re right, but I deeply fear that you’re wrong and that we’ll be stuck with a centralized service with decentralized trappings.
I would also encourage you to take that forward-looking perspective into your personal evaluations of non-Mastodon ActivityPub software. There is a lot of work going on in that space, in a lot of different directions.
I wish we could see what ActivityPub would look like if cutting-edge developers like Tobi were supported by an 8 million dollar seed round, rather than a bunch of nerds giving them a $60 a month. I wish we could see what Mastodon itself could look like if it had 8 million dollars to start out with rather than, at a stretch, two million Euros ever.
I generally agree with your logic but I think this part overstates the value of money. I think 1 personally-motivated programmer beats 10 money-motivated programmers in productivity and quality of output the majority of the time. I think anyone who is personally motivated to work on non-centralized micro-blogging platforms would be happy with modest pay, so long as they can live in comfort and with security.
I think we have to build truly P2P social media.
What do you mean exactly by P2P social media? I know your intent is something non-federated but what does that look like in practice? Do you mean every user has a private key and the client is all you need to disseminate messages?
I think anyone who is personally-motivated to work on non-centralized micro-blogging platforms would be happy with a modest pay, so long as they can live in comfort and with security.
Yep, absolutely agreed. I can give you names of about a dozen people I’d hire if Nora’s Acme ActivityPub LLC got funding today, and I’d bet you anything that we’d eclipse BlueSky in a year or less.
What do you mean exactly by P2P social media?
Honestly? I don’t know. I’ve written a bit about it in the past, but the core of my design indecision is that most people - even the kind of people who are willing to self-host an ActivityPub server - don’t want to do PKI, and can’t be relied upon not to fuck it up. I say this as someone who semi-regularly uses PGP-encrypted email.
I do have a vision for a truly P2P design that would probably work pretty well if the RSA fairy came along and solved PKI for us, though.
Amazing post! I’ve always felt that the official docs for commit signing were almost intentionally leaving out details about how revocation is supposed to work. Seems the answer is that it doesn’t really work!
Hey ~quad, thanks for your time reading through this and commenting.
I’m surprised you’d rather the post be even longer than it already is :p
Jokes aside, I did demonstrate using git-verify-commit to locally verify Gitsign commits.
I didn’t think it was worthwhile to demonstrate similar for GPG or SSH since it’d rapidly become a discussion on key distribution (as ~snej notes, really the disappointing thing here is PKI).
I already noted my frustrations with GPG’s web of trust and S/MIME, but also alluded to similar trouble with gpg.ssh.allowedSignersFile for SSH keys. We could query GitHub’s REST API for known engineers’ signing keys and construct an allowedSignersFile, but validating old commits remains difficult without a mechanism to communicate key revocation.
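As a hedged sketch of that idea (fetching omitted, entries made up): GitHub’s REST API exposes `GET /users/{username}/ssh_signing_keys`, whose `key` fields look like `<type> <base64>`; gluing a principal on the front yields the line format `gpg.ssh.allowedSignersFile` expects. It still does nothing for revocation, which is the crux.

```python
# Build allowedSignersFile content from (principal, key) pairs, where each key
# is in the "<type> <base64>" shape GitHub's ssh_signing_keys endpoint returns.
# The emails and keys below are fabricated for illustration.
def allowed_signers(entries):
    return "".join(f"{principal} {key}\n" for principal, key in entries)

text = allowed_signers([
    ("alice@example.com", "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIexample1"),
    ("bob@example.com",   "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIexample2"),
])
print(text, end="")
```

ssh-keygen’s allowed-signers format does support `valid-after`/`valid-before` options per line, so you could window keys by hand, but that just relocates the revocation problem into a file you now have to distribute and maintain.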
At least git’s PKI agnosticism is likely a contributor to Gitsign’s early technical feasibility :)
I love a long post, especially when it’s straight to the point with excellent examples like yours! 🥰
You inspired me to dig further into Sigstore and Rekor. At first blush, it seems wrong that commits are going in the CT log and I want to understand what I’m missing!
I suppose the proposed usage matches up with the intent behind existing usage of HTTP status code 420 to signal a rate limit has been exceeded, albeit often labelled “Enhance Your Calm” (popularised by the Twitter REST API) - in a way, a client that misbehaves when faced with rate limiting could be considered an “impaired requester.” :)
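A suitably calm client would back off when it sees a 420/429 rather than hammer the server. A minimal sketch (function name and defaults are mine) of capped exponential backoff with full jitter:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# A calm client sleeps backoff_delay(n) before retry n after a 420/429,
# treating any Retry-After header the server sends as a floor on the delay.
delays = [backoff_delay(n) for n in range(8)]
```

The jitter matters: without it, every throttled client retries in lockstep and the server gets a synchronized thundering herd exactly when it asked for calm.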
Ironically, I learnt about ECH because our corporate Palo Alto firewall blocked it by default.
There’s one thing I fail to understand, however. Cloudflare says they’ll treat any request with SNI “cloudflare-ech.com” as ECH, but how is the client supposed to send that SNI in the first place? If I want to reach “randombits.tld”, how do I know that I must use Cloudflare’s ECH as the outer SNI? Is there some magic DNS trick that’s not mentioned in the docs?
Was wondering the same… there’s an older Cloudflare blog from 2020 that notes reliance on an HTTPS Resource Record for ECH configuration. Their developer docs also suggest corporate networks can break ECH by manipulating/dropping these DNS records.
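So the “magic DNS trick” is roughly: the client looks up the HTTPS resource record (type 65) for randombits.tld (e.g. `dig randombits.tld HTTPS`) and finds an `ech=` SvcParam carrying a base64 ECHConfigList, which itself encodes the public name (cloudflare-ech.com) to use as the outer SNI. A hedged Python sketch of pulling that parameter out of dig’s presentation-format output (the sample record is made up, not a real key):

```python
def extract_ech(rr_text):
    """Pull the base64 ECHConfigList out of an HTTPS RR in presentation format."""
    for token in rr_text.split():
        if token.startswith("ech="):
            return token[len("ech="):]
    return None  # no ECH advertised for this name

# Illustrative record only; a real ech= value decodes to an ECHConfigList
# containing the public_name that becomes the outer SNI.
sample = '1 . alpn="h2,http/1.1" ipv4hint=203.0.113.1 ech=AEX+DQBBFAKE='
print(extract_ech(sample))
```

Clients that can’t fetch (or whose network drops) the HTTPS RR simply fall back to plaintext SNI, which is exactly the degradation the Cloudflare docs warn corporate networks can force.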
API keys are used to secure the highest-stakes APIs that exist today — all of AWS’s services, for example. Yet while API keys seem to be considered an entirely reasonable and industry standard design approach, passwords are now considered the unwelcome black sheep whose role as a sufficient criterion for authentication is viewed with increasing dubiousness.
Since the user-specified password functionality is now seemingly so distrusted as a widespread industry practice, it raises the question of why not just either use only TOTP for login, or issue a password in the same way that TOTP secrets are issued: randomly and non-customisably.
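For illustration, issuing a credential the way TOTP secrets are issued is a one-liner with Python’s `secrets` module (real TOTP secrets are usually base32-encoded; the encodings here are just for the sketch):

```python
import secrets

# Server-issued, random, non-customisable credentials carry guaranteed
# entropy, unlike user-chosen passwords.
password = secrets.token_urlsafe(24)   # 24 random bytes -> 32 urlsafe chars
totp_secret = secrets.token_hex(20)    # 160 bits, the usual TOTP secret size
```

The open question the comment raises stands: once both are server-issued random strings, the remaining distinction between a “password” and a “second factor” is mostly that one is typed from memory and the other from a device.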
Does anyone have an explanation of what this means? If I understand correctly (guessing at many of the words, from context), this is an IRC host, disabling a thing that lets you transparently project IRC channels as if they were Matrix things and instead requiring you to explicitly configure this? Presumably this is because a lot of people were able to send spam via Matrix to Matrix to IRC and the IRC server had no recourse other than to ban everyone on the closest Matrix hop?
I think your summary is pretty close (Matrix effectively operates an IRC bouncer for portalled rooms) - there’s also a post from Matrix themselves at https://matrix.org/blog/2023/07/deportalling-libera-chat/ which goes into more detail.
Really appreciate the author’s thinking here around psychological safety empowering teams to make decisions that are flexible to change in the future - it reminds me of the practices described in https://kind.engineering/
Since a large part of your critique is focused on signatures made with outdated keys, it occurs to me that this implies a secure use of public signatures would be to remember all the signatures you’ve made and periodically refresh them, even if nothing about the software has changed.
I’m not sure that substituting minisign, SSH, or whatever the preferred signature tool du jour is would make a difference in this regard; this is a shortcoming of build infrastructure.
I understand this is part of the reasoning behind Rekor within Sigstore- a compromised key (due to old algos or leaks) shouldn’t be capable of creating unwanted signatures without being easily detectable.
Admittedly, Sigstore’s Fulcio only issuing keys valid for 10min means meaningful key compromise is far less likely than using long-lived PGP/SSH/minisign keys (you’d hopefully not request a certificate with an algorithm weak enough to be crackable within 10min anyway ^^;).
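The validity-window check itself is trivial. A hedged sketch (times are illustrative) of why a 10-minute certificate bounds the damage: a signature is only trusted if its transparency-logged timestamp falls inside the certificate’s window, so a key leaked after expiry can’t mint new trusted signatures:

```python
from datetime import datetime, timedelta, timezone

def signed_within_validity(sig_time, not_before, not_after):
    """Trust a signature only if its logged timestamp is inside the cert window."""
    return not_before <= sig_time <= not_after

issued = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
expiry = issued + timedelta(minutes=10)          # Fulcio-style short-lived cert

ok   = signed_within_validity(issued + timedelta(minutes=5), issued, expiry)
late = signed_within_validity(issued + timedelta(hours=2), issued, expiry)
print(ok, late)  # True False
```

The subtlety is where `sig_time` comes from: it has to be the Rekor-logged timestamp, not one the signer asserts, or a compromised key could simply backdate itself into the window.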
Personally, I think SSHFP is a better way to solve the TOFU problem with SSH than requiring every SSH server to also run a webserver and announce its existence via the CT log (a consequence of requesting a WebPKI certificate).
Hi! Thank you for the feedback. I agree that DNS would be the perfect place for host key fingerprints.
It is possible to configure resolv.conf so that ssh can get that bit of information. How often is that done? And that is not the only problem with DNSSEC.
imo mandatory CT logs aren’t a bad thing: they allow you to be certain no one has created a certificate for your domain that would be accepted by browsers. You could avoid having particular servers/subdomains be identifiable by issuing each a wildcard certificate (Let’s Encrypt even supports free wildcard certs through a DNS challenge).
With DNSSEC, it’s hard to prove that a registrar or the TLD operator hasn’t temporarily changed your DNSSEC keys without constant DNS monitoring. This is particularly worrying considering DNSSEC infrastructure is mostly controlled by world governments.
I’m not entirely sure I understand your point. You correctly describe why CT (“report all certificate issuances to Google”) is needed to keep WebPKI in check, but the question at hand is if it’s wise to use WebPKI for SSH, which it is not because it would require you to announce your SSH server before you’ve had a chance to set it up correctly.
I don’t know about you, but when I install a new box, I want to keep it off the internet (block incoming traffic except my own, keep it out of DNS) until I’ve set it up entirely. But I can’t do that anymore if it needs an incoming port 80 to the internet to do a little ACME song and dance to be recorded into CT before I can login to it, and I have just announced to the world that I just set up a new server, inviting everyone to start probing whether I did at least configure my firewall correctly.
Because now I suddenly have to run an internet-facing webserver before I run SSH (or I need to somehow let this new machine write stuff in its DNS zone, which is hard without an MDM solution in my home lab), and if ACME fails (I didn’t set up DNS, the machine did too many attempts) I’m locked out until Let’s Encrypt lets me into my own machine again. Not to mention: how can you get a certificate for a mobile device, such as a laptop, that moves between networks and might thus not have a static name?
Or is all of this optional, because you can also login without the WebPKI bit? Then an attacker would simply need to block port 443 from your client to the SSH server, and you’re back to TOFU (which according to OP, isn’t good enough). An attacker may even be able to DoS you by dropping the traffic, making your client wait for a long timeout.
first curl https://host.domain.tld/.well-known/ssh/host.domain.tld,
then curl https://domain.tld/.well-known/ssh/host.domain.tld,
and then? curl https://tld/.well-known/ssh/host.domain.tld?
So for this “bubbling up” to make sense, you suddenly need to involve the public suffix list from Mozilla, to know where you need to stop (you could hardcode stopping at two elements, but then you would still attempt https://co.uk/…)
And it still doesn’t solve the issue, how do you handle errors? When do you decide to abort, or bubble up? If the HTTPS connection times out? If it answers 404? If it answers something else than 200 or 404? How long do you wait for an answer? Is that timeout per server or for the whole process?
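To make the ambiguity concrete, here’s a hedged Python sketch of just the “bubbling up” candidate generation, with a tiny hardcoded set standing in for Mozilla’s public suffix list. Note that it answers none of the error-handling questions above; it only decides which URLs are even candidates:

```python
# Stand-in for the Mozilla public suffix list; the real list has thousands
# of entries and changes over time, which is itself part of the problem.
PUBLIC_SUFFIXES = {"com", "uk", "co.uk", "tld"}

def candidate_urls(host):
    """Bubble up from host toward the registry, stopping at a public suffix."""
    urls = []
    labels = host.split(".")
    for i in range(len(labels)):
        parent = ".".join(labels[i:])
        if parent in PUBLIC_SUFFIXES:
            break  # never query the registry itself (e.g. https://co.uk/...)
        urls.append(f"https://{parent}/.well-known/ssh/{host}")
    return urls

print(candidate_urls("host.domain.co.uk"))
```

Even this toy version shows the dependency: the correctness of the stopping condition is only as good as your copy of the suffix list, and every client now needs to keep that list fresh.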
I don’t see how any telemetry transmitted via the internet that is opt-out is not a direct violation of the GDPR. The IP address that is transmitted with it (in the IP packets) is protected information that you don’t have consent to collect - you failed at step 0 and broke the law before you even received the bits you actually care about.
Of course, the GDPR seems to be going routinely unenforced except against the largest and most blatant violations, but I really don’t see why a company like Google would risk it, or why other large companies are actively risking it.
My understanding of the GDPR was that IP addresses are not automatically PII. Even in situations where they are, simply receiving a connection from an IP address does not incur any responsibilities, because you require the IP for technical reasons to maintain the connection. It’s only when you record the IP address that you may hit issues. You can generally use some fairly simple differential privacy features to manage this (e.g. drop one of the bytes from your log).
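A hedged sketch of that kind of log-time anonymisation with Python’s stdlib `ipaddress` (the /24 and /48 prefixes are common choices, e.g. what Google Analytics used for IP anonymisation, not a GDPR-blessed standard):

```python
import ipaddress

def anonymise(ip):
    """Zero the low bits before logging so the stored value no longer
    identifies a single subscriber: keep /24 for IPv4, /48 for IPv6."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymise("203.0.113.77"))  # 203.0.113.0
```

The key property is that the truncation happens before anything touches disk; an “anonymise later” batch job would still mean the raw addresses were recorded.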
(30) Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.
This doesn’t actually say that collecting IP addresses is not allowed. It only states that when the natural person is known, online identifiers could be used to create profiles.
Furthermore this is only relevant if those online identifiers are actually processed and stored. According to the Google proposal they are not. They only keep record of the anonymous counters. Which is 100% fine with GDPR.
It’s a shame the go compiler isn’t well positioned UX-wise to ask users for opt-in consent at installation (as an IDE might) since that’d likely solve privacy concerns while reaching folk that don’t know about an opt-in config flag.
Yes, IP addresses are not automatically PII, but if you can’t ensure they are not, you must assume they are. The telemetry data itself is probably not PII, because it’s anonymized.
GDPR prohibits processing[0] of (private) data, but contains some exceptions. The most commonly used one is to fulfill a contract (this doesn’t need to be a written contract with payment). So assume you have an online shop. When a user orders, say, a printer, you need their address to ship the printer to them. But when the user orders an ebook you don’t need the address, because you don’t need to ship the ebook. In the case of Go the service would be compiling Go code. I don’t see a technical requirement to send Google your IP address.
The next common exception is a requirement from some other law (e.g. tax law or anti-money-laundering law). I think there is none here.
Next one is user consent: you know those annoying cookie banners. Consent must be explicit and can’t be assumed (and dark patterns are prohibited). So this requires an opt-in.
Next one would be legitimate interest. This is more or less the log-file exception. Here you might argue that the Go team needs this data to improve their compiler. I don’t think this would stand, because other compilers work pretty well without telemetry.
So all together I[1] would say the only legal way to collect the telemetry data is some sort of user consent.
[0] Yes, processing, not only storing, so having a web server answering HTTP requests might also fall under GDPR.
You are wrong. The GDPR is not some magic checkbox that says “do not ever send telemetry”. The GDPR cares about PII and your IP address and a bunch of anonymous counters are simply not PII. There is nothing to enforce in this case.
And what, exactly, is so wrong about MitM yourself, on your own network? Have we been so gaslit by “security specialists” that doing so on our own equipment is considered unthinkable? Or am I just an old man yelling at clouds?
There’s lots of research on the prevalence of people screwing up TLS interception like this (I recently looked for some so my team at work would have ammunition for refusing to do so on work laptops, which we manage).
That being said there’s a lot going for this approach:
Go’s TLS library is probably pretty reasonable and is likely to prevent a lot of common footguns here - not passing on certificate validation failures from the upstream origin, etc.
You’re only doing it for one website, t.co, instead of generic TLS connections which significantly reduces attack surface (and complexity!).
You’re not doing this at scale/you’re probably not a target. Yeah someone could do Bad Things™ with your root CA certificate if they got onto your network, but on a typical home network you’ve got bigger problems then. So meh?
🤷 seems PROBABLY okayish even though it makes me sweat a little! Not that I am an expert.
Admittedly, if you’re only hijacking one hostname, you might as well self-sign an entity certificate for the target hostname and directly add it to your trust stores (without creating a self-signed CA).
There’s a standard that exists for that. I was party to implementations of that, but I don’t think it got much traction on the internet at large. The easiest mainstream way is to certify it using a root that you control and add name constraints, but for that to be secure (in a general way) you need to own both CAs.
Not to say this is a bad solution, but what happens when your friends come over and ask to use your wifi? Presumably they haven’t installed your CA’s root cert. (Ignore for a moment the fact that obviously any TRUE friend would install their friend’s root cert.)
Anyway the benefits outweigh the downsides, but it’s something to think about.
A much better solution is to abolish t.co altogether, which is now a lot closer to happening than I would have dared to hope six months ago! I haven’t followed a t.co link in months, and with any luck never will again, but I understand others might not be so fortunate at this time.
For my situation I actually don’t have Adguard as the DNS resolver on my router, mainly because I’ve never been able to get it to work, so I just update the DNS manually on devices instead - so friends and family won’t be affected (but they wouldn’t be able to use this tool) unless they specifically set the DNS resolver on their phones.
It’s a fair point though, I guess creating an isolated VLAN/guest network for guests would be another way around this.
Another valid reply is that t.co is inherently sketchy as hell, and getting a warning when you’re accessing it isn’t necessarily a bad thing. (But it would be better if the warning were clearer about the specific problem.)
n.b. Passkey is a generic term for FIDO/WebAuthn credentials, which PyPI’s 2FA supports in addition to TOTP. PyPI also requires you to record a set of recovery codes and asks you to recite one back during its 2FA setup process.
It seems fears around recovery/device migration are a significant part of the rationale behind Apple’s passkeys implementation requiring iCloud
Speaking as someone who worked in the hosting biz and had to deal with this stuff, fears around recovery and device migration are all too legitimate. “I lost my 2FA” was one of my most-loathed support requests. Usually it was “I used the authenticator app on my old phone and forgot to migrate”.
As the article hints at, what makes MFA really viable is the hidden factor: human-to-human / human-to-organization relationships. Social relationships, not technical ones.
I’m also not comfortable with $bigtech_corp setting itself up as a trusted intermediary for the same reason. $bigtech_corp tends to be all about lack of accountability and destroying legitimate social relationships.
I have questions not answers, problems not solutions.
Usually it was “I used the authenticator app on my old phone and forgot to migrate”.
Or “my old phone is now toast and I forgot the authenticator was there and there goes all my access”
Thankfully I had my core device codes backed up, but some stuff I just had to write off to no longer having access to because there wasn’t a support team to engage.
I moved phones several years ago and had some but not all TFA codes migrate. Fortunately I noticed before I sent the old phone to recycling but jeez why was that a possible failure mode? All or none, ffs.
That recovery thing was my biggest concern when getting my old SE repaired and the upgrade to the 13. It went well though, but I always think about those things.
My guess would be a genuine feeling that it’s not good for EU people that an American advertising company, an American browser vendor, an American computer company, and an American software company functionally control who’s allowed to issue acceptable certificates worldwide.
Sure, but then the answer is that the EU should make Mozilla open an office in Brussels or somewhere and then shovel money at Firefox, so that they have their own player in the browser wars. Tons of problems are created for users by the fact that Google and Apple have perverse incentives for their browsers (and that Mozilla’s incentive is just to figure out some source, any source of funding). Funding Mozilla directly would give EU citizens a voice in the browser wars and provide an important counterbalance to the American browsers.
On the other side, passing laws that require compliance from foreign firms operating in the EU has been successful; for as much as it sucks and is annoying to both comply with and use websites that claim to comply with it, the GDPR has been mostly complied with.
A) In an EU context, it’s hard to argue that Airbus hasn’t been successful for promoting European values. If the WTO disagrees, that’s because the WTO’s job is not to promote European values. I can’t really imagine how Google or Apple could win a lawsuit against the EU for funding a browser since they give their browsers away for free, but anyone can file a lawsuit about anything, I suppose.
B) I don’t see how anyone can spend all day clicking through pointless banners and argue that the current regulatory approach is successfully promoting EU values. The current approach sucks and is not successful. Arguably China did more to promote its Chinese values with Tiktok than all the cookie banners of the last six years have done for the EU’s goals.
The EU government’s goal for Airbus is to take money from the rest of the world and put it in European paychecks.
The goal of the GDPR is to allow people in Europe a level of consent and control over how private surveillance systems watch them. The GDPR isn’t just the cookie banners; it’s the idea that you can get your shit out of facebook and get your shit off facebook, and that facebook will face consequences when it comes to light that they’ve fucked that up.
Google could absolutely come up with a lawsuit if the EU subsidizes Mozilla enough to let Mozilla disentangle from Google and start attacking Google’s business by implementing the same privacy features that Apple does.
A trusted and secure European e-ID - Regulation, linked to in the article’s opening, is a revision of existing eIDAS regulation aiming to facilitate interoperable eID schemes in Member States. eIDAS is heavily reliant on X.509 (often through smartcards in national ID cards) to provide a cryptographic identity.
The EU’s interest in browser Certificate Authorities stems from the following objective in the draft regulation:
They should recognise and display Qualified certificates for website authentication to provide a high level of assurance, allowing website owners to assert their identity as owners of a website and users to identify the website owners with a high degree of certainty.
… to be implemented through a replacement to Article 45:
Qualified certificates for website authentication referred to in paragraph 1 shall be recognised by web-browsers. For those purposes web-browsers shall ensure that the identity data provided using any of the methods is displayed in a user friendly manner.
Mozilla’s November 2021 eIDAS Position Paper, also linked in the original article, goes into more detail about the incompatibilities with the ‘Qualified Website Authentication Certificates’ scheme and the CA/Browser Forum’s policies.
I live in France, and a number of vaccines are already mandatory (for obvious public health reasons).
I’ve never had to present a proof of vaccination when I go to the theatre. Or Theme park. Or anywhere within my country for that matter. Even for international travel, didn’t need to give the USA such proof when I came to see the total solar eclipse in 2019. I’ve also never had to disclose the date of my vaccines, or any information about my health.
What you call “all manner of situation” is actually very narrow. This certificate is something new. A precedent.
and a number of vaccines are already mandatory (for obvious public health reasons).
This is why you’ve not been asked for proof for international travel, since it’s assumed that you’ll have received these immunisations or be unexposed through herd immunity as someone who resides in France.
We’re currently in a migration period where some people are immunised and others aren’t. We’ve had this happen before– the WHO is responsible for coordinating the Carte Jaune standard (first enforced on 1 August 1935) to aid with information sharing, but they haven’t extended it to include COVID-19 immunisation yet.
(Note: international travel is one use case where I believe it’s perfectly legitimate to ask for evidence of vaccination. It’s the only way a country can make sure it won’t get some public health problems on its hands, which makes it a matter of sovereignty.)
It’s not the government that’s sharing this information. It’s you when you present that QR code. This is equivalent to your doctor printing out a piece of your medical records and handing it to you. You can do whatever the hell you want with that piece. It’s your medical history. If you want to show it to someone, you can. If you don’t want to show it to someone, you can. The government only issues the pass. Nothing more.
The QR code has a very important difference from a piece of paper one would look at: its contents are trivially recorded. A piece of paper, on the other hand, is quickly forgotten.
This is equivalent to your doctor printing out a piece of your medical records and handing it to you.
No, this is equivalent to me printing out a piece of my medical record and handing it to the guard at the entrance of the theatre. And I’m giving them way more than what they need to know. They only need a cryptographic certificate with an expiration date, and I’m giving them when I got my shot or whether I’ve been naturally infected. I can already see insurance companies buying data from security companies.
You can do whatever the hell you want with that piece. It’s your medical history.
There’s a significant difference between the US and the EU here that is worth emphasising. In the US, your personal information (such as your medical history) is kind of your property. You can give it or sell it and all sorts of things. In the EU, however, your personal information is a part of you, and as such is less alienable than your property. I personally align with the EU more than the US on this one, because things that describe you can be used to influence, manipulate, and in some cases persecute you.
If you want to show it to someone, you can. If you don’t want to show it to someone, you can.
Do I really have that choice? Can I really choose not to show my medical history if it means never showing up at the theatre or any form of crowded entertainment? Here’s another one: could you actually choose not to carry a tracking device with you at nearly all times? Can you live with the consequences of no longer owning a cell phone?
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status? And why should someone medically unable to be vaccinated care about your privacy when their life is at risk?
As someone whose father is immunocompromised, and with a dear friend who could not receive the vaccine due to a blood disease, fuck off. People have died.
Since you’re forcing my hand, know that I received my first injection not long ago, and have my appointment for the second one. Since I’m in good health, I don’t mind sharing too much.
What I do mind is that your father and dear friend have to share their information. Your father will likely need more than 2 injections; if that’s written down, one can suspect a compromised immune system. Your friend will be exempt; if that’s written down, one can suspect some illness. That makes them vulnerable, and I don’t want that. They may not want that.
Now let’s say we do need that certificate. Because yes, I am willing to give up a sliver of liberty for the health of us all. The certificate only needs 3 things:
Information that can be linked to your ID (some number, your name…)
An expiration date.
A cryptographic certificate from the government.
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
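A runnable sketch of that three-field pass. A real scheme would use an asymmetric signature under the government’s key; the HMAC and the key name here are stdlib stand-ins just to show the issue/verify flow:

```python
import hashlib
import hmac
import json

GOVERNMENT_KEY = b"stand-in secret"  # hypothetical; really an asymmetric keypair

def issue(holder_id: str, expires: int) -> dict:
    """Issue a pass holding only an ID link, an expiry, and a signature."""
    payload = {"id": holder_id, "exp": expires}
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(GOVERNMENT_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify(pass_: dict, now: int) -> bool:
    """Pass/fail only: valid signature and not expired. No medical details."""
    payload = {"id": pass_["id"], "exp": pass_["exp"]}
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(GOVERNMENT_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, pass_["sig"]) and now < pass_["exp"]
```

The reader learns whether you’re clear, and nothing about why.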
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status?
I do not carry that device by choice. The social expectation that people can call me at any time is too strong. I’m as hooked as any junkie now.
I am willing to give up a sliver of liberty for the health of us all.
I appreciate your willingness, your previous comments made me think you weren’t. I apologize for my hostility. I think we can agree we should strive to uphold privacy to the utmost, but not at the expense of lives.
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
That’s true, and that system would be more secure. But the additional detail could provide utility that outweighs that concern.
I can already see insurance companies buying data from security companies.
Insurance companies already have access to your medical history in the US. Equitable health care is an ongoing struggle here. ¯\_(ツ)_/¯
Edit: I removed parts about US law that could be incorrect, as IANAL.
HIPAA states PHI (personal health information) cannot be viewed by anyone without a need to know that information, and information systems should never even allow unauthorized persons to view that information in the first place. A device or piece of software that displayed PHI to a movie theatre clerk would never go to market because it would never pass HIPAA compliance.
Damn it, no, this is incredibly wrong.
HIPAA applies to covered entities and business associates only. Covered entities are health care providers, insurance plans, and clearinghouses/HIEs. Business associates are companies that provide services to covered entities – so if you are an independent medical coder that reads doctor notes and assigns ICD10 codes, you’re covered because you provide services to a covered entity. How do you know if you’re a business associate? You’ve signed a BAA.
Movie theaters are not covered entities, and are not business associates. HIPAA has zero bearing on what they do. Your movie theater clerk could absolutely mandate you share your vaccination status – just like your doughnut vendor can ask in exchange for a free doughnut.
Your movie theater clerk could absolutely mandate you share your vaccination status
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
But I admit I am surprised they’d legally be able to store and sell your medical records. It seems you’re correct, and I had incorrectly generalized my experience and knowledge dealing with other covered entities all day to non-covered entities. A classic blunder of a programmer speaking about law, whoops. I’ve cut those statements from my prior comment.
I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
It is conceivable you could make an ADA argument here – “I can’t get a COVID vaccination due to a medical condition; therefore, you need to provide a reasonable accommodation to me”. But that’s maybe a stretch, I’m not sure.
But I admit I am surprised they’d legally be able to store and sell your medical records
I think a lot of this comes down to training about HIPAA. If you’re in-scope for HIPAA, many places (rightfully) treat PHI as radioactive and communicate that to employees. And there’s very little risk in overstating the risk around mishandling PHI - it’s far safer to overmessage the dangers to people who work with it.
Indeed, until I needed to get involved on the compliance side – after all, somebody has to quote HITRUST controls for RFPs – I overfit HIPAA as well.
I’m yet again disappointed by US privacy law.
If you want to feel marginally better, go read up on 42 CFR Part 2. It still only applies to covered entities but it offers real, meaningful protections to an especially vulnerable population: people seeking treatment for substance use disorder. It also makes restrictions around HIPAA data handling look trivial.
But the additional detail could provide utility that outweighs that concern.
Possibly. That would need to be studied and justified, I believe.
Furthermore any reader of these QR codes should only return a pass/fail result, […]
Actually that’s what I expect from official programs, including in France. The problem is the QR code itself: any program can read it, and it’s too easy (and therefore tempting) to write or use a program that displays (or record!) everything.
HIPAA laws are some of the few here that have teeth
Hmm, that’s less horrible than I thought, then. Glad to hear it.
Hmm, that’s less horrible than I thought, then. Glad to hear it.
As @owen points out, IANAL and these laws don’t apply in this circumstance. I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
Just for fun, the contents of the EU covid cert have a much more concise-looking schema than the US one (less XML-y deep structure and magic URLs). And the European container seems to be CBOR + Base45 vs. the US one JSON base64’d then run through a transform that doubles byte count turning everything into decimal digits. Both use gzip. (Ed: turns out QR codes have a numeric encoding that makes three decimal digits only take ten bits, so the US way is transmitting 6 bits in 6 and 2/3 bits on average, ~90% efficient. And Base45 gets 16 bits in three 5.5-bit chars, ~97% efficient. Now it all makes more sense!)
Interesting that both versions seem to fit in that size QR code (must just be able to hold a lot); I’d’ve thought even with gzip, everything in the US structure would be a tight fit.
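The edit’s arithmetic checks out; as a quick sketch of the two efficiency figures:

```python
# QR numeric mode packs 3 decimal digits into 10 bits; the US scheme
# spends 2 digits (20/3 bits) per 6-bit base64 value.
us_efficiency = 6 / (2 * 10 / 3)   # 0.90
# QR alphanumeric mode packs 2 chars into 11 bits; Base45 spends
# 3 chars (16.5 bits) per 16 bits of payload.
eu_efficiency = 16 / (3 * 11 / 2)  # ~0.97
print(round(us_efficiency, 3), round(eu_efficiency, 3))
```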
Note that what the US one is using is a standardised interoperable healthcare format called FHIR.
The json representation looks pretty verbose, but handles many things you’d forget when coming up with your own format to represent healthcare data.
Just look at the FHIR R4 definition for HumanName in context of Patient
name HumanName 0..*: A person may have 0, 1, or more names
For each HumanName:
use {usual, temporary, official, nickname, maiden, ...}: The context of this HumanName; does this person use it as a nickname, is it the person’s maiden name, …
family string 0..1: May or may not have a family name
given string 0..*: 0 or more given names (first and middle names)
period: 0..1 Period: The time period this name was/is/will be used
And this is just a small extract from just the HumanName data type. FHIR also has a system to manage logical IDs as well as external IDs (i.e. if a Patient is tracked in different databases in a hospital), support for various code systems used in healthcare (ICD-10, CPT, …), the most complex/complete system to handle temporal information I’ve seen, a super-integrated extension mechanism, …
The whole documentation, data schema definition and basically everything is also completely machine-readable.
It’s very complex, but I recommend everyone who does some sort of data modelling to take a look at some of the concepts. It’s a great inspiration.
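For a concrete feel, here’s what a Patient fragment using the HumanName fields above might look like (values are made up for illustration):

```python
import json

# A Patient.name array: an official name with given names and a validity
# period, plus a nickname entry with no family name at all.
patient_fragment = json.loads("""
{
  "resourceType": "Patient",
  "name": [
    {
      "use": "official",
      "family": "Dupont",
      "given": ["Marie", "Claire"],
      "period": {"start": "1990-04-01"}
    },
    {"use": "nickname", "given": ["Mimi"]}
  ]
}
""")
```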
Source: I’ve been working with FHIR for a few years now :-)
I always loved how Stripe’s REST API handles opaque IDs as a way to prevent confusion. While the Backwards-compatible changes documentation calls out “adding or removing fixed prefixes” as a backwards-compatible change, you’ll notice opaque IDs generated by Stripe usually include a short, human-readable prefix describing the ID. Some examples:
Publishable API key: pk_test_TYooMQauvdEDq54NiTphI7jx
You’re not meant to rely on these within your own code (I think some of the other suggestions in this post around strict type systems are far more applicable in that case), but they’re brilliant sanity-checks while running through a debugger’s stack view to make sure you’ve not accidentally referenced the wrong variable. Doubly so since Stripe’s documentation provides examples of the fixed prefixes for their API responses.
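The debugger sanity-check could look something like this (the prefix table is a small illustrative subset; Stripe documents prefixes per object type but advises treating IDs as opaque in production code):

```python
# Map a few documented Stripe-style prefixes to what they identify.
PREFIXES = {
    "pk_": "publishable key",
    "sk_": "secret key",
    "cus_": "customer",
    "ch_": "charge",
}

def describe(opaque_id: str) -> str:
    """Best-effort human description of a prefixed opaque ID."""
    for prefix, kind in PREFIXES.items():
        if opaque_id.startswith(prefix):
            return kind
    return "unknown"

# describe("cus_9s6XKzkNRiz8i3") -> "customer"
```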
This is nice, and probably works well with the more “dynamic” languages used on the web. I wonder if they use this representation in the database as well, or if this is somehow “decoded” somewhere, and if it is, what they use as an internal representation.
tsk tsk should have used signal
For calendar syncing? For health data? For bookmarks? For contacts? Not sure I follow.
oh crap I thought this was about iMessage. I guess they are going after iCloud but iMessage will continue to be presented as E2EE for now?
Sort of- the E2EE guarantees go away if you have iCloud Backup and Messages in iCloud enabled on an account lacking ADP.
via https://support.apple.com/en-us/102651:
It’s eternally disappointing Apple don’t encourage enrolling in ADP during initial setup, considering they do encourage enrolling in both iCloud Backups and Messages in iCloud.
If you have a lot of these kinds of convos, I would highly recommend setting up explicit “on call” for team members to be in charge of handling external requests. Otherwise you can have motivated people be a bit ambitious with handling everything, get overloaded with firefighting, and end up just kinda exhausted. All while not sharing the institutional knowledge enough so they become a SPOF in an odd way.
Always good to make sure team members are taking steps forwards, of course. I just think that when people are doing this on rotation then it removes a bit of variability. Not a hard and fast rule of course.
In addition to an explicit ‘on call,’ there were two other practices at my former employer that I think advance this philosophy.
One, we had a policy that a customer could not be redirected more than three times. If you were last in the line, you had to hold the ticket to completion rather than redirect to another team. The one time I was the one holding the ticket, the client was seeing random errors throughout the product. As it turned out, they had an HTTP proxy on their side that was randomly failing requests (but only on certain domains); the policy forced someone to fully investigate rather than keep passing the buck once symptoms could be ascribed to a different team.
Secondly, as the company grew, we added an ‘engineer support’ role that could support the on-calls. They could handle long-term investigations and support jobs that were longer than a week, but not big or long enough to warrant an actual project.
Totally agree with your advice for an explicit “on call” during business-hours.
Crucially, moving support out of DMs and into public channels means others can search logs for advice on similar issues (and sometimes even answer their own questions!)
I wrote an internal bot a few years ago that syncs Slack user groups with $work’s on call scheduler. Folks can say `@myteam-oncall` in a public channel and instantly reach the right person without overambitious members needing to be involved in triage. It’s also easy enough to say `@friendlyteam-oncall` and redirect folk in-place to another team without switching channels or losing context.
My thing was to create a Slack action, where when it was done on an action it would:
Was excellent stuff IMO
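The core of that usergroup sync could be sketched like this. `sync_oncall_group` and the scheduler side are assumptions; the Slack side maps to the real `usergroups.users.list`/`usergroups.users.update` Web API methods:

```python
def sync_oncall_group(slack_client, usergroup_id, oncall_user_ids):
    """Make the Slack user group's members match the current on-call set.

    Returns True if an update was pushed, False if already in sync.
    """
    current = set(
        slack_client.usergroups_users_list(usergroup=usergroup_id)["users"]
    )
    desired = set(oncall_user_ids)
    if current != desired:  # avoid no-op API writes on every poll
        slack_client.usergroups_users_update(
            usergroup=usergroup_id, users=",".join(sorted(desired))
        )
    return current != desired
```

Run it on a timer (or from the scheduler’s webhook) and `@myteam-oncall` always resolves to whoever is currently on call.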
For me, this document raises the “Pixiv problem”. That is to say, even assuming every PDS is okay with hosting any content they are legally permitted to, some content is legal in some jurisdictions and not others.
More specifically, from reading this and the ATProto docs, my understanding is that PDSes host (that is, both store and serve) a user’s repository (that is, their posts). Based on this section:
It seems to me that, in accepting a new user, a PDS accepts responsibility for hosting everything that user has ever posted, both in legal terms and in terms of storage space and bandwidth. What if that user and their previous PDS were in a jurisdiction where, say, lolicon is legal, but the new PDS is not? This is the basic reason that most ActivityPub implementations don’t port posts upon receiving the `Move` activity. How does BlueSky handle this case?
Edit: I also think that this line about the `did:plc` scheme is a bit disingenuous:
I think the opinion of most folks who are bsky-shy is neither of these; or, rather, it’s an example of pragmatically shipping something and then acting like its replacement is already here. If we are going to evaluate BlueSky on how DIDs might work in the future, we also have to agree to evaluate other distributed solutions on their promises rather than their realities.
I think this is a stretch. Object identifiers in ActivityPub are either `null` or “publicly dereferencable URIs, such as HTTPS URIs, with their authority belonging to that of their originating server” (0).
While some implementations (like honk) readily permit importing posts from data exports, these can’t fully assume the old posts’ identities and are unable to migrate likes/boosts/replies from other servers. Notably, ActivityPub’s assumption of invariant object identifiers prevented the recently shut down `queer.af` instance from simply adopting a new domain outwith the `.af` TLD (1).
Bluesky encodes attachments to posts as a `blob` type (2): these aren’t directly stored in user repositories; instead, just a reference to the blob’s CID is used (3).
Where illegal content is uploaded to a PDS as a blob, the PDS can refuse to serve the blob without otherwise manipulating the user’s repository (4). Blobs can be taken down by PDS admins through the com.atproto.admin.updateSubjectStatus XRPC method, eventually landing in the ModerationService’s takedownBlob.
tl;dr: new PDS is empowered to enforce local laws by refusing to serve problematic blobs (or, if needed, the entire repository).
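For context, here’s roughly what a post’s image embed looks like inside a repository record: a typed reference to the blob’s CID, never the bytes themselves. Values (and the exact lexicon shape) are illustrative, from memory of the atproto docs:

```python
import json

# An app.bsky.embed.images fragment: the repository record carries only
# the blob's CID reference, mime type, and size.
post_embed = json.loads("""
{
  "$type": "app.bsky.embed.images",
  "images": [{
    "alt": "a cat",
    "image": {
      "$type": "blob",
      "ref": {"$link": "bafkreib...example"},
      "mimeType": "image/jpeg",
      "size": 123456
    }
  }]
}
""")
```

This separation is what lets a PDS take down a blob while leaving the signed repository intact.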
For better (~`queer.af`-type situations) or worse (~lolicon), moving to a third PDS would allow a user to restore blobs that have been taken down by the second PDS by reuploading them bit-for-bit (from a backup, archive, …) while preserving the CID.
I think this response conflates identity with content, which is common in these discussions, but I want to be sure we separate these things. First of all, it is absolutely true that ActivityPub relies on DNS as its identity system. I personally think we can take better advantage of that [1], but that’s a core aspect of the network. If your instance loses ownership over its domain, your identity is lost, unless you can issue a `Move` activity before that happens.
In the case of `queer.af`, many users did `Move` to other instances, preserving most or all of their social graph automatically. In theory - and that’s the realm we’re operating in, because nobody has yet migrated the entire userbase of a PDS under adverse conditions - Erin could have spun up a new instance at `queeraf.othertld`, generated profiles and accounts for all the users from `queer.af` who hadn’t yet `Move`d, and issued `Move` activities for their accounts, moving everyone there and preserving their identities. In this way, ActivityPub and ATProto provide a similar level of protection for identity and social graphs.
Where they differ is in trust. In ATProto, your DID is tied to either the DNS system or BlueSky-the-company (and, in future, maybe to key material you create and own). In ActivityPub, your identity is tied to the instance you’re using. ActivityPub defends you better against a single company being able to destroy your identity; ATProto defends you better against a compromised or destroyed PDS.
So much for identity; on to content. You assert:
This is true, but I don’t think it’s really… that important? There are two cases where this matters: metrics and links.
Let’s discuss links first. In both ATProto and ActivityPub-as-implemented, links are mostly tied to a particular service, and can’t move around if that service goes down. In ATProto, that’s the Application, [3] whereas in ActivityPub, the data’s owner (like the PDS) and the canonical web view into it are bundled. [2] In both cases, if the service fronting the links goes down, the links become ~impossible to access, as time tends towards infinity; neither design really solves this problem.
For metrics, it’s certainly true that ATProto solves this problem where ActivityPub doesn’t. I think this is an example of a tradeoff that ActivityPub makes fairly intentionally; it’s not possible to reconstruct the like count of a post from observing the network, because that would make likes public, which is not acceptable in AP’s privacy model. I prefer AP’s solution; other people prefer ATProto’s solution; that’s fine.
But there’s another point, and it’s the main thing I don’t think the ATProto folks have really thought through. There’s actually no technical reason you couldn’t reconstitute posts as you move an account from one ActivityPub server to the next; as you say, honk permits this. My assertion in the GP was that the reason Mastodon, Akkoma, and GoToSocial don’t do this is because it makes moderation hard. Your response was:

This is, of course, true of ActivityPub as well. Taking down posts, or just media from posts, is not difficult on existing ActivityPub servers. One could imagine building automated workflows to do this during account content moves, even. But that is difficult; either it’s a time-consuming and soul-draining manual process of checking possibly thousands of posts for content that is illegal or against the {instance, PDS’s} rules, or an extremely error-prone automated one, with user frustration and possible legal consequences if your automation fails.
At a protocol functionality level, the only difference between ActivityPub and ATProto here is that ATProto preserves the ability to reconstitute post metrics across PDS moves. That is, frankly, a pretty small win, in my opinion, given the resulting privacy issues - and it puts the vast majority of the userbase’s “distributed” identities (as well as the permission to stand up your own PDS!) in the hands of a venture capital-backed startup.
1: https://github.com/mastodon/mastodon/issues/24760

2: That said, even in ActivityPub, it’s possible to access a post on the web through a third party; in the queer.af case, many queer.af posts remain cached on other instances, and looking them up by their previous unique ID (their URL) works fine, just as it would in ATProto.

3: I anticipate a protestation that the Application link would still work if the PDS goes down or moves. That’s true, but it only matters if we imagine that Application servers are more stable, overall, than PDSes. Why would that be the case?
Another reason is that they’re generally hard-coded around a “post arrives, send post out” workflow - backfilling would require logic around “if this post is backdated[1], do not send it out”. It’s not impossible[2] but it does require a whole bunch of “who can do this?”, “what are the constraints?” etc. considerations.
[1] Which the MastoAPI doesn’t provide support for anyway.

[2] I locally patched my Akkoma to do this when backfilling 10 years of Twitter bot content.
That’s true. As you say, though, this is not an inherent limitation, just something most people don’t really care to change, since they don’t want to import posts anyway.
I understand this perspective, but I think it only really works if you see the current state as an end point, and not a start. I haven’t heard the devs ever literally invoke Pixiv, but the Japanese contingent on BlueSky is large. They’ve demonstrated that they understand issues like revenge porn and CSAM, and aren’t interested in trying to make such content available forever. This stuff isn’t simple in a federated system.
I don’t intend to be disingenuous. I hear you that like, a lot of folks who are bsky positive tend to think about it aspirationally, that is, what it could be, and not what it is today. I think that early on in life, that stance makes sense, but also appreciate that “wait and see” isn’t good enough for everyone. That’s fine.
To me, when I think about systems, I want to know what futures they make possible, and which they make impossible. This specific decision doesn’t preclude other possibilities, if you dislike how did:plc works, and that’s important, to me at least. And I have seen the team act in repeatedly good ways, and so I tend to view them charitably, whereas some people are just inherently suspicious. It’s up to them to prove that our good faith is well founded. All I meant was that it’s fine with me if you don’t want to hear it and prefer to evaluate it more harshly.

I think you know I respect you a lot. The way you describe ATProto and the things you see in its future are really exciting, and I want to share your optimistic vision. But to me, it feels like a lot of people - yourself included - are taking an optimistic view of ATProto despite having had a historically pessimistic view of, especially, ActivityPub.
ActivityPub has problems, but it also does a lot of really amazing things without any money and while being dramatically less centralized than BlueSky is in practice. I wish we could see what ActivityPub would look like if cutting-edge developers like Tobi were supported by an 8 million dollar seed round, rather than a bunch of nerds giving them $60 a month. I wish we could see what Mastodon itself could look like if it had 8 million dollars to start out with rather than, at a stretch, two million Euros ever.
In particular, BlueSky is marketing itself in an inherently disingenuous way. They published a paper about ATProto being “Usable Decentralized Social Media” before it was meaningfully decentralized; it still isn’t, really, since it’s not possible to run your own PDS without Bluesky-the-company’s blessing, or your own Relay. That makes me pretty skeptical that the BlueSky team is going to do a lot of work to make it truly decentralized, and I freely admit, a little bitter. We’ve been doing decentralized social media in the {StatusNet, OStatus, ActivityPub, …} network for, depending on how you figure it, a decade; why does this half-baked Twitter clone get the spotlight?
There are many problems with ActivityPub; but for all its faults, ActivityPub was born decentralized. It has dozens of server implementations, of which perhaps a dozen are usable in production, and hundreds of clients. I myself use a non-Fediverse ActivityPub based network, as well as having an account on the wider Fediverse, using the same software to the same ends but without providing Mastodon gGmbH anything, even content. That’s simply not possible with ATProto and its related software right now. I also don’t see anything in the way of implementation efforts for third-party PDS and relay software (though it’s possible that they just exist outside my bubble.)
The OP perpetuates the same issue elsewhere, too:
As far as I understand, that’s not true, unless we also say there is “no Facebook server” because facebook.com is a distributed system, or that there was no WhatsApp server when they used XMPP internally. There is a BlueSky server - it’s the Relay and the PDSes they own, and they control access completely.
Total user choice - as long as you’re not choosing to run a PDS Bluesky-the-company doesn’t want to federate with. Even beyond that, it’s hard for me to imagine that in a future with, say, thousands of PDSes and hundreds or dozens of Relays, no Application blocks a Relay, and no Relay blocks a PDS. If the response is that this is solved by combining the feeds of multiple Relays, well, that’s entirely possible in other microblogging tools, and it’s not widely implemented in ATProto yet either, because it doesn’t need to be, because… it’s all run by one company.
If what you really want is total user choice, I don’t think we can stop at federation; I think we have to build truly P2P social media. ActivityPub ain’t that, but neither is Bluesky.
I do, and I appreciate the length. I feel we’re going back and forth here, so I’ll probably drop this thread after this, but I do think there’s one part here that’s illustrative of this dichotomy we’re on the opposite ends of:
I can see that. From my perspective:
We are here. Which leads you to say
But to me, what I see, is (these are their direct words): “Here, in this first phase of federation, you can file a request for your self-hosted PDS to be crawled and added to the production federated network. This is an early access phase as federation rolls out. In the next phase, you will not need to file a request through Bluesky.” I also see a team who has consistently promised to move towards openness. One of their mottos is “the company is a future adversary.” They are executing a plan, and continue to execute on that plan.
So, that’s why I’m not with you here: it feels like you’re reading bad faith into a slow and thoughtful rollout process of a core feature of the entire project.
That said, I do think your point about future vs present is apt, and I’m going to be thinking about it for a while. For me though, the present of Mastodon is not fit for purpose, and that’s maybe why I am so forward looking.
(oh, one last thing: “I also don’t see anything in the way of implementation efforts for third-party PDS and relay software (though it’s possible that they just exist outside my bubble.)” I don’t yet either; I think that’s because the API isn’t considered to be set in stone just yet. I personally want to reimplement everything, but don’t know if I have the energy or time, and so I don’t want to begin until things are more settled. I suspect many others are in the same boat.)
Yeah, I don’t intend to make this a further back and forth. I appreciate you reading and responding to my wall of nonsense. I genuinely do hope you’re right, but I deeply fear that you’re wrong and that we’ll be stuck with a centralized service with decentralized trappings.
I would also encourage you to take that forward-looking perspective into your personal evaluations of non-Mastodon ActivityPub software. There is a lot of work going on in that space, in a lot of different directions.
I generally agree with your logic, but I think this part overstates the value of money. I think one personally motivated programmer beats ten money-motivated programmers in productivity and quality of output the majority of the time. I think anyone who is personally motivated to work on non-centralized micro-blogging platforms would be happy with modest pay, so long as they can live in comfort and with security.
What do you mean exactly by P2P social media? I know your intent is something non-federated but what does that look like in practice? Do you mean every user has a private key and the client is all you need to disseminate messages?
Yep, absolutely agreed. I can give you names of about a dozen people I’d hire if Nora’s Acme ActivityPub LLC got funding today, and I’d bet you anything that we’d eclipse BlueSky in a year or less.
Honestly? I don’t know. I’ve written a bit about it in the past, but the core of my design indecision is that most people - even the kind of people who are willing to self-host an ActivityPub server - don’t want to do PKI, and can’t be relied upon not to fuck it up. I say this as someone who semi-regularly uses PGP-encrypted email.
I do have a vision for a truly P2P design that would probably work pretty well if the RSA fairy came along and solved PKI for us, though.
Whoa, there’s a ton of cool stuff in here.
https://github.com/kormax/apple-enhanced-contactless-polling is particularly neat. I’d always wondered how background NFC tag scanning on iPhones avoided triggering Apple Pay on other devices - turns out there’s just an ignore frame (replaced in iOS 17 with NameDrop).
Amazing post! I’ve always felt that the official docs for commit signing were almost intentionally leaving out details about how revocation is supposed to work. Seems the answer is that it doesn’t really work!
AIUI revocation is a policy embedded within a PKI. Git is PKI agnostic, so…
… it’s interesting how this article only shows signature verification as performed by centralised providers. Not a git log --show-signature in sight.

Hey ~quad, thanks for your time reading through this and commenting.
I’m surprised you’d rather the post be even longer than it already is :p
Jokes aside, I did demonstrate using git-verify-commit to locally verify Gitsign commits.
I didn’t think it was worthwhile to demonstrate similar for GPG or SSH since it’d rapidly become a discussion on key distribution (as ~snej notes, really the disappointing thing here is PKI).
I already noted my frustrations with GPG’s web of trust and S/MIME, but also alluded to similar trouble with gpg.ssh.allowedSignersFile for SSH keys. We could query GitHub’s REST API for known engineers’ signing keys and construct an allowedSignersFile, but validating old commits remains difficult without a mechanism to communicate key revocation.

At least git’s PKI agnosticism is likely a contributor to Gitsign’s early technical feasibility :)
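For illustration, a minimal sketch of what that allowedSignersFile setup might look like (the email, key material, and paths here are placeholders; in practice the keys could be pulled from GitHub’s GET /users/{username}/ssh_signing_keys endpoint):

```shell
# Hypothetical allowed_signers entry: maps a principal (email) to an SSH public key.
# Format per ssh-keygen(1)'s ALLOWED SIGNERS section: principals [options] keytype key
mkdir -p ~/.config/git
cat > ~/.config/git/allowed_signers <<'EOF'
alice@example.com namespaces="git" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDERKEYDATA
EOF

# Then point git at the file; verify-commit/verify-tag will consult it
# when checking SSH signatures:
#   git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
```

The catch remains as described above: nothing in this file says when a key stopped being trustworthy, so old commits signed by a later-revoked key still verify.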
I love a long post, especially when it’s straight to the point with excellent examples like yours! 🥰
You inspired me to dig further into Sigstore and Rekor. At first blush, it seems wrong that commits are going in the CT log and I want to understand what I’m missing!
I suppose the proposed usage matches up with the intent behind existing usage of HTTP status code 420 to signal a rate limit has been exceeded, albeit often labelled “Enhance Your Calm” (popularised by the Twitter REST API) - in a way, a client that misbehaves when faced with rate limiting could be considered an “impaired requester.” :)
Ironically, I learnt about ECH because our corporate Palo Alto firewall blocked it by default.
There’s one thing I fail to understand however. Cloudflare says they’ll treat any request with SNI “cloudflare-ech.com” as ECH, but how is the client supposed to send that SNI in the first place? If I want to reach “randombits.tld”, how do I know that I must use cloudflare’s ECH as the outer SNI? Is there some magic DNS trick that’s not mentioned in the docs?
Was wondering the same… there’s an older Cloudflare blog from 2020 that notes reliance on a HTTPS Resource Record for ECH configuration. Their developer docs also suggest corporate networks can break ECH by manipulating/dropping these DNS records.
It seems the structure for this is defined at https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni-16#section-4, specifically public_name for the outer SNI value.

Appreciate human-to-server auth is the primary focus here, not server-to-server, but I figured it’d be worth noting OIDC tends to be the preferred mechanism (e.g., GitHub or GitLab) - particularly after Travis CI had a breach impacting secrets for OSS repos in ’21 and CircleCI had a breach impacting all secrets in January
–
Passkeys?
edit: ah, /u/yawaramin also noted Passkeys while I was typing up my comment :)
Does anyone have an explanation of what this means? If I understand correctly (guessing at many of the words, from context), this is an IRC host, disabling a thing that lets you transparently project IRC channels as if they were Matrix things and instead requiring you to explicitly configure this? Presumably this is because a lot of people were able to send spam via Matrix to Matrix to IRC and the IRC server had no recourse other than to ban everyone on the closest Matrix hop?
I think your summary is pretty close (Matrix effectively operates an IRC bouncer for portalled rooms) - there’s also a post from Matrix themselves at https://matrix.org/blog/2023/07/deportalling-libera-chat/ which goes into more detail
That post by Matrix is really really good, a masterclass in honest and respectful communication. Neil Johnson, your writing was a joy to read.
Really appreciate the author’s thinking here around psychological safety empowering teams to make decisions that are flexible to change in the future - it reminds me of the practices described in https://kind.engineering/
Since a large part of your critique is focused on signatures signed by outdated keys, it occurs to me that this implies a secure use of public signatures would be to remember all the signatures you’ve made, and periodically update them, even if nothing about the software has changed.
I’m not sure that substituting minisign/ssh whatever the preferred signature tool du jour would make a difference in this regard; this is a shortcoming of build infrastructure.
I understand this is part of the reasoning behind Rekor within Sigstore - a compromised key (due to old algos or leaks) shouldn’t be capable of creating unwanted signatures without being easily detectable.
Admittedly, Sigstore’s Fulcio only issuing keys valid for 10min means meaningful key compromise is far less likely than using long-lived PGP/SSH/minisign keys (you’d hopefully not request a certificate with an algorithm weak enough to be crackable within 10min anyway ^^;).
I think it does? According to this blogpost, ssh will refuse SSHFP entries that are not signed.
Personally, I think SSHFP is a better way to solve the TOFU-problem with SSH, rather than requiring every SSH server to also run a webserver, and announcing their existence via the CT-log (a consequence of requesting a WebPKI certificate).
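As a sketch of that SSHFP workflow (the hostname and key paths are placeholders), ssh-keygen can emit the records to publish directly:

```shell
# Generate a throwaway host key, then emit the SSHFP resource records
# you would publish in the DNS zone for that host.
rm -f ./demo_host_key ./demo_host_key.pub
ssh-keygen -t ed25519 -N '' -f ./demo_host_key
ssh-keygen -r myhost.example.org -f ./demo_host_key.pub

# Clients then opt in to checking the records, ideally behind a
# DNSSEC-validating resolver:
#   ssh -o VerifyHostKeyDNS=yes myhost.example.org
```

The output is lines like `myhost.example.org IN SSHFP 4 2 <sha256-fingerprint>`, ready to paste into the zone file.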
Hi! Thank you for the feedback. I agree that DNS would be the perfect place for host key fingerprints.
It is possible to configure resolv.conf so that ssh can get that bit of information. How often is that done? And that is not the only problem with DNSSEC.
imo CT logs being mandatory isn’t a bad thing: they allow you to be certain no one has created a certificate for your domain that would be accepted by browsers. You could avoid having particular servers/subdomains be identifiable by issuing each a wildcard certificate (LetsEncrypt even supports free wildcard certs through a DNS challenge).
With DNSSEC, it’s hard to evidence a registrar/the TLD operator hasn’t temporarily changed your DNSSEC keys without constant DNS monitoring. This is particularly worrying considering DNSSEC infrastructure is mostly controlled by world governments.
(my reasoning here is inspired by this masto post & the associated thread, linked to in a blog post linked by the original post.)
I’m not entirely sure I understand your point. You correctly describe why CT (“report all certificate issuances to Google”) is needed to keep WebPKI in check, but the question at hand is if it’s wise to use WebPKI for SSH, which it is not because it would require you to announce your SSH server before you’ve had a chance to set it up correctly.
I don’t know about you, but when I install a new box, I want to keep it off the internet (block incoming traffic except my own, keep it out of DNS) until I’ve set it up entirely. But I can’t do that anymore if it needs an incoming port 80 to the internet to do a little ACME song and dance to be recorded into CT before I can login to it, and I have just announced to the world that I just set up a new server, inviting everyone to start probing whether I did at least configure my firewall correctly.
Because now I suddenly have to run an internet facing webserver before I run SSH (or I need to somehow let this new machine write stuff in its DNS zone, which is hard without an MDM solution in my home lab), and if ACME fails (I didn’t set up DNS, the machine did too many attempts) I’m locked out until Let’s Encrypt lets me in into my own machine again. Not to mention how can you get a certificate for a mobile device, such as a laptop, that is in different networks and might thus not have a static name.
Or is all of this optional, because you can also login without the WebPKI bit? Then an attacker would simply need to block port 443 from your client to the SSH server, and you’re back to TOFU (which according to OP, isn’t good enough). An attacker may even be able to DoS you by dropping the traffic, making your client wait for a long timeout.
I was too focused on DNS in my previous reply. Did you note that the https server doesn’t have to be on the same host?
I missed that, yes. But how exactly does this work then?
curl https://host.domain.tld/.well-known/ssh/host.domain.tld, curl https://domain.tld/.well-known/ssh/host.domain.tld, curl https://tld/.well-known/ssh/host.domain.tld?

So for this “bubbling up” to make sense, you suddenly need to involve the public suffix list from Mozilla, to know where you need to stop (you could hardcode stopping at two elements, but then you would still attempt https://co.uk/…)

And it still doesn’t solve the issue: how do you handle errors? When do you decide to abort, or bubble up? If the HTTPS connection times out? If it answers 404? If it answers something other than 200 or 404? How long do you wait for an answer? Is that timeout per server or for the whole process?

As you can see, this solution has quite a lot of complexity connected to it when you think about it. Not to say there isn’t complexity in SSHFP+DNSSEC; I agree that DNSSEC is still a bit hard to set up, but that’s a problem with tooling, not the standard (it has been through multiple iterations simplifying it). As for the client setup, currently you may need to add trust-ad to your resolv.conf, but it doesn’t have to be like that.

I don’t see how any telemetry transmitted via the internet that is opt-out is not a direct violation of the GDPR. The IP address that is transmitted with it (in the IP packets) is protected information that you don’t have consent to collect - you failed at step 0 and broke the law before you even received the bits you actually care about.
Of course, the GDPR seems to be going routinely unenforced except against the largest and most blatant violations, but I really don’t see why a company like google would risk it. Why other large companies are actively risking it.
My understanding of the GDPR was that IP addresses are not automatically PII. Even in situations where they are, simply receiving a connection from an IP address does not incur any responsibilities because you require the IP for technical reasons to maintain the connection. It’s only when you record the IP address that it may hit issues. You can generally use some fairly simple differential privacy features to manage this (e.g. drop one of the bytes from your log).
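A minimal sketch of that “drop a byte” idea, assuming a simple prefix-truncation policy (the function name and prefix lengths are my choices for illustration, not from the GDPR or any ruling):

```python
# Truncate addresses before they ever reach a log line, so stored logs
# never contain the full address. This is plain anonymisation by
# coarsening, not formal differential privacy.
import ipaddress

def anonymise_ip(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        # Zero the host byte: keep only the /24 prefix.
        net = ipaddress.ip_network(f"{addr}/24", strict=False)
    else:
        # For IPv6, keep only a /48 routing prefix.
        net = ipaddress.ip_network(f"{addr}/48", strict=False)
    return str(net.network_address)

print(anonymise_ip("203.0.113.57"))         # → 203.0.113.0
print(anonymise_ip("2001:db8:abcd:12::1"))  # → 2001:db8:abcd::
```

The key property is that the truncation happens before any write, so there is nothing identifying to delete later.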
The EU has ruled that IP addresses are GDPR::PII, sadly.
There’s nothing sad about it. I bet that you think that your home address, ICBM coordinates, etc. are PII too.
Do you have a link to that ruling, I’d be very interested in reading it.
(emphasis mine. via the GDPR text, Regulation (EU) 2016/679)
fwiw- “PII” is a US-centric term that isn’t used within GDPR, which instead regulates “processing personal data”.
This doesn’t actually say that collecting IP addresses is not allowed. It only states that when the natural person is known, online identifiers could be used to create profiles.
Furthermore this is only relevant if those online identifiers are actually processed and stored. According to the Google proposal they are not. They only keep record of the anonymous counters. Which is 100% fine with GDPR.
(IANAL) I’d seen analytics software like Fathom and GoatCounter rely on (as you mention) anonymised counters to avoid creating profiles on natural persons, but also we’ve seen a court frown upon automatic usage of Google Fonts due to automatic transmission of IP addresses to servers in the US.
It’s a shame the go compiler isn’t well positioned UX-wise to ask users for opt-in consent at installation (as an IDE might) since that’d likely solve privacy concerns while reaching folk that don’t know about an opt-in config flag.
[admittedly, Google already receives IP addresses of Go users through https://proxy.golang.org/ anyway (which does log IP addresses, but “for [no] more than 30 days”) ¯\_(ツ)_/¯]
Yes, IP addresses are not automatically PII, but if you can’t guarantee that they are not, you must assume they are. The telemetry data itself is probably not PII, because it’s anonymized.
GDPR prohibits processing[0] of (private) data, but contains some exceptions. The most commonly used one is to fulfil a contract (this doesn’t need to be a written-down contract with payment). So assume you have an online shop. A user orders e.g. a printer; you need their address to send the printer to them. But when the user orders an ebook, you don’t need the address, because you don’t need to ship the ebook. In the case of Go, the service would be compiling Go code. I don’t see a technical requirement to send Google your IP address.
Next common exception is some requirement by another law (e.g. tax law or money-laundering protection law). I think there is none here.
Next one is user consent: you know those annoying cookie banners. Consent must be explicit and can’t be assumed (and dark patterns are prohibited). So this requires an opt-in.
Next one would be legitimate interest. This is more or less the log-file exception. Here you might argue that the Go team needs this data to improve their compiler. I don’t think this would stand, because other compilers work pretty well without telemetry.
So all together I[1] would say the only legal way to collect the telemetry data is some sort of user consent.
[0] Yes, processing, not only storing, so having a web server answering HTTP requests might also fall under the GDPR.
[1] I’m not a lawyer
You are wrong. The GDPR is not some magic checkbox that says “do not ever send telemetry”. The GDPR cares about PII and your IP address and a bunch of anonymous counters are simply not PII. There is nothing to enforce in this case.
If something is permitted by the law, it doesn’t automatically mean it’s also good
It’s a good thing that nobody’s arguing that, then.
Hah, you’re right, I must have mixed up two comments. Glad we all agree then :)
And what, exactly, is so wrong about MitM yourself, on your own network? Have we been so gaslit by “security specialists” that doing so on our own equipment is considered unthinkable? Or am I just an old man yelling at clouds?
There’s lots of research on the prevalence of people screwing up TLS interception like this (I recently looked for some so my team at work would have ammunition for refusing to do so on work laptops, which we manage).
That being said, there’s a lot going for this approach: intercepting only specific hostnames like t.co, instead of generic TLS connections, significantly reduces attack surface (and complexity!).

🤷 seems PROBABLY okayish even though it makes me sweat a little! Not that I am an expert.
On point #3, the Name Constraints Extension appears to be a good mitigation toward someone hijacking the root CA.
http://pkiglobe.org/name_constraints.html and https://www.sysadmins.lv/blog-en/x509-name-constraints-certificate-extension-all-you-should-know.aspx have interesting notes on how these constraints get applied to entity certificates by clients.
Unfortunately it seems some browsers only apply name constraints to intermediate CAs (https://bugs.chromium.org/p/chromium/issues/detail?id=1072083), so even this might not be a silver bullet.
Admittedly, if you’re only hijacking one hostname, you might as well self-sign an entity certificate for the target hostname and directly add it to your trust stores (without creating a self-signed CA).
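A sketch of that single-hostname approach with OpenSSL (the filenames are placeholders; requires OpenSSL 1.1.1+ for -addext):

```shell
# Self-sign a certificate for just the one hostname being hijacked (t.co),
# rather than minting a whole root CA. The resulting cert is the only
# thing you would add to the client trust stores.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout tco_key.pem -out tco_cert.pem \
  -subj "/CN=t.co" -addext "subjectAltName=DNS:t.co"

# Inspect the result:
openssl x509 -in tco_cert.pem -noout -subject
```

Trusting one leaf certificate like this scopes the blast radius of a key leak to a single name, unlike trusting a root that can sign for anything.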
Relatedly, I wish there was a way to add a CA to your trust base BUT only for certain specific domains and subdomains.
There’s a standard that exists for that. I was party to implementations of that, but I don’t think it got much traction on the internet at large. The easiest mainstream way is to certify it using a root that you control and add name constraints, but for that to be secure (in a general way) you need to own both CAs.
I didn’t know CA name constraints are a thing. Thank you.
Hehe, I guess I was just preparing for the deluge of disapprovals so spent a good while explaining myself!
Agree. This is not a terrible solution and I don’t see why it wouldn’t be recommended. This is a great hack and I love it.
Not to say this is a bad solution, but what happens when your friends come over and ask to use your wifi? Presumably they haven’t installed your CA’s root cert. (Ignore for a moment the fact that obviously any TRUE friend would install their friend’s root cert.)
Anyway the benefits outweigh the downsides, but it’s something to think about.
A much better solution is to abolish t.co altogether, which is now a lot closer to happening than I would have dared to hope six months ago! I haven’t followed a t.co link in months, and with any luck never will again, but I understand others might not be so fortunate at this time.

I operate an open wifi for friends to use, but that’s a fair point
So you would only do this interception on the private WiFi then?
This is a fair point.
For my situation I actually don’t have Adguard as the DNS resolver on my router, mainly because I’ve never been able to get it to work, so I just update the DNS manually on devices instead - so friends and family won’t be affected (but they wouldn’t be able to use this tool) unless they specifically set the DNS resolver on their phones.
It’s a fair point though, I guess creating an isolated VLAN/guest network for guests would be another way around this.
Another valid reply is that t.co is inherently sketchy as hell, and getting a warning when you’re accessing it isn’t necessarily a bad thing. (But it would be better if the warning were clearer about the specific problem.)
It seems fears around recovery/device migration are a significant part of the rationale behind Apple’s passkeys implementation requiring iCloud Keychain sync https://twitter.com/rmondello/status/1534914697123667969 (referencing https://developer.apple.com/forums/thread/707539)
n.b. Passkey is a generic term for FIDO/WebAuthn credentials, which PyPI’s 2FA supports in addition to TOTP. PyPI also requires you to record a set of recovery codes and asks you to recite a code back during their 2FA setup process.
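For reference, the TOTP half of that is a small, fully specified algorithm; here’s a stdlib-only sketch of RFC 6238 (HMAC-SHA1, 30-second steps), checked against the RFC’s own test vector:

```python
# Minimal TOTP (RFC 6238) on top of HOTP (RFC 4226), stdlib only.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    # Counter = number of time steps since the Unix epoch.
    now = time.time() if timestamp is None else timestamp
    counter = int(now) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226 section 5.3.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA1, 8 digits, T=59s):
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # → 94287082
```

Both the server and the authenticator app compute this from the shared secret and the clock, which is exactly why losing the phone (and with it the secret) is the recovery problem discussed below.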
Speaking as someone who worked in the hosting biz and had to deal with this stuff, fears around recovery and device migration are all too legitimate. “I lost my 2FA” was one of my most-loathed support requests. Usually it was “I used the authenticator app on my old phone and forgot to migrate”.
As the article hints at, what makes MFA really viable is the hidden factor: human-to-human / human-to-organization relationships. Social relationships, not technical ones.
I’m also not comfortable with $bigtech_corp setting itself up as a trusted intermediary for the same reason. $bigtech_corp tends to be all about lack of accountability and destroying legitimate social relationships.
I have questions not answers, problems not solutions.
Or “my old phone is now toast and I forgot the authenticator was there and there goes all my access”
Thankfully I had my core device codes backed up, but some stuff I just had to write off to no longer having access to because there wasn’t a support team to engage.
I moved phones several years ago and had some but not all TFA codes migrate. Fortunately I noticed before I sent the old phone to recycling but jeez why was that a possible failure mode? All or none, ffs.
That recovery thing was my biggest concern when getting my old SE repaired and the upgrade to the 13. It went well though, but I always think about those things.
What is the interest of the EU to get involved in certificate issuance? Is it bureaucracy overreach or is there something else behind this effort?
My guess would be a genuine feeling that it’s not good for EU people that an American advertising company, an American browser vendor, an American computer company, and an American software company functionally control who’s allowed to issue acceptable certificates worldwide.
Sure, but then the answer is that the EU should make Mozilla open an office in Brussels or somewhere and then shovel money at Firefox, so that they have their own player in the browser wars. Tons of problems are created for users by the fact that Google and Apple have perverse incentives for their browsers (and that Mozilla’s incentive is just to figure out some source, any source, of funding). Funding Mozilla directly would give EU citizens a voice in the browser wars and provide an important counterbalance to the American browsers.
Directly funding a commercial entity tasked with competing with foreign commercial entities is a huge problem; Airbus and Boeing have had disputes about that for a long time: https://en.wikipedia.org/wiki/Competition_between_Airbus_and_Boeing#World_Trade_Organization_litigation
On the other side, passing laws that require compliance from foreign firms operating in the EU has been successful; for as much as it sucks and is annoying to both comply with and use websites that claim to comply with it, the GDPR has been mostly complied with.
A) In an EU context, it’s hard to argue that Airbus hasn’t been successful at promoting European values. If the WTO disagrees, that’s because the WTO’s job is not to promote European values. I can’t really imagine how Google or Apple could win a lawsuit against the EU for funding a browser since they give their browsers away for free, but anyone can file a lawsuit about anything, I suppose.
B) I don’t see how anyone can spend all day clicking through pointless banners and argue that the current regulatory approach is successfully promoting EU values. The current approach sucks and is not successful. Arguably China did more to promote its Chinese values with Tiktok than all the cookie banners of the last six years have done for the EU’s goals.
None of this is about “promoting EU values.”
The EU government’s goal for Airbus is to take money from the rest of the world and put it in European paychecks.
The goal of the GDPR is to allow people in Europe a level of consent and control over how private surveillance systems watch them. The GDPR isn’t just the cookie banners; it’s the idea that you can get your shit out of facebook and get your shit off facebook, and that facebook will face consequences when it comes to light that they’ve fucked that up.
Google could absolutely come up with a lawsuit if the EU subsidizes Mozilla enough to let Mozilla disentangle from Google and start attacking Google’s business by implementing the same privacy features that Apple does.
Yes, and it’s a failure because everyone just clicks agree, since the “don’t track me” button is hidden.
That’s one answer, but why does it have to be “the” answer?
A trusted and secure European e-ID - Regulation, linked to in the article’s opening, is a revision of existing eIDAS regulation aiming to facilitate interoperable eID schemes in Member States. eIDAS is heavily reliant on X.509 (often through smartcards in national ID cards) to provide a cryptographic identity.
The EU’s interest in browser Certificate Authorities stems from an objective in the draft regulation […], to be implemented through a replacement to Article 45.
Mozilla’s November 2021 eIDAS Position Paper, also linked in the original article, goes into more detail about the incompatibilities with the ‘Qualified Website Authentication Certificates’ scheme and the CA/Browser Forum’s policies.
Well, the government are not joking. What happened to medical confidentiality?
Having to prove you have a vaccination has been a requirement in all manner of situations before this - like international travel.
I live in France, and a number of vaccines are already mandatory (for obvious public health reasons).
I’ve never had to present proof of vaccination when I go to the theatre. Or a theme park. Or anywhere within my country, for that matter. Even for international travel: I didn’t need to give the USA such proof when I came to see the total solar eclipse in 2019. I’ve also never had to disclose the date of my vaccines, or any information about my health.
What you call “all manner of situation” is actually very narrow. This certificate is something new. A precedent.
This is why you’ve not been asked for proof for international travel, since it’s assumed that you’ll have received these immunisations or be unexposed through herd immunity as someone who resides in France.
We’re currently in a migration period where some people are immunised and others aren’t. We’ve had this happen before: the WHO is responsible for coordinating the Carte Jaune standard (first enforced on 1 August 1935) to aid with information sharing, but they haven’t extended it to include COVID-19 immunisation yet.
In a 1972 article, the NYTimes ran the headline “Travel Notes: Immunization Cards No Longer Needed for European Trips” regarding smallpox immunisations.
Still, even today, immigrants applying to the United States for permanent residency remain required to present evidence of vaccinations recommended by the CDC: https://www.cdc.gov/immigrantrefugeehealth/laws-regs/vaccination-immigration/revised-vaccination-immigration-faq.html#whatvaccines
(Note: international travel is one use case where I believe it’s perfectly legitimate to ask for evidence of vaccination. It’s the only way a country can make sure it won’t get some public health problems on its hands, which makes it a matter of sovereignty.)
It’s not the government that’s sharing this information. It’s you when you present that QR code. This is equivalent to your doctor printing out a piece of your medical records and handing it to you. You can do whatever the hell you want with that piece. It’s your medical history. If you want to show it to someone, you can. If you don’t want to show it to someone, you can. The government only issues the pass. Nothing more.
The QR code differs from a piece of paper one would merely look at in a very important way: its contents are trivially recorded. A piece of paper, on the other hand, is quickly forgotten.
No, this is equivalent to me printing out a piece of my medical record and handing it to the guard at the entrance of the theatre. And I’m giving them way more than they need to know. They only need a cryptographic certificate with an expiration date, yet I’m giving them the date I got my shot, or whether I’ve been naturally infected. I can already see insurance companies buying data from security companies.
There’s a significant difference between the US and the EU here that is worth emphasising. In the US, your personal information (such as your medical history) is kind of your property. You can give it or sell it and all sorts of things. In the EU, however, your personal information is a part of you, and as such is less alienable than your property. I personally align with the EU more than the US on this one, because things that describe you can be used to influence, manipulate, and in some cases persecute you.
Do I really have that choice? Can I really choose not to show my medical history if it means never showing up at the theatre or any form of crowded entertainment? Here’s another one: could you actually choose not to carry a tracking device with you at nearly all times? Can you live with the consequences of no longer owning a cell phone?
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status? And why should someone medically unable to be vaccinated care about your privacy when their life is at risk?
As someone whose father is immunocompromised, and with a dear friend who could not receive the vaccine due to a blood disease, fuck off. People have died.
Since you’re forcing my hand, know that I received my first injection not long ago, and have my appointment for the second one. Since I have good health, I don’t mind sharing too much.
What I do mind is that your father and dear friend have to share their information. Your father will likely need more than 2 injections; if that’s written down, people can suspect he’s immunocompromised. Your friend will be exempt; if that’s written down, people can suspect some illness. That makes them vulnerable, and I don’t want that. They may not want that.
Now let’s say we do need that certificate. Because yes, I am willing to give up a sliver of liberty for the health of us all. The certificate only needs 3 things:
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
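For concreteness, here’s a rough sketch of such a minimal pass in Python. Everything here is hypothetical (the field names, issue_pass/verify_pass, and the key are all made up), and HMAC merely stands in for the real public-key signature scheme an actual issuer would use; the point is only that the verifier learns a clear/not-clear flag and an expiry, nothing else:

```python
import hashlib
import hmac
import json

# Assumption: a secret held by the issuing authority. A real scheme would
# use an asymmetric signature so venues never hold signing material.
ISSUER_KEY = b"demo-issuer-key"

def issue_pass(clear: bool, valid_until: int) -> dict:
    """Issue a pass carrying only a boolean status and a Unix-time expiry."""
    payload = {"clear": clear, "valid_until": valid_until}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_pass(p: dict, now: int) -> bool:
    """Check the signature, the flag, and the expiry; learn nothing else."""
    blob = json.dumps(p["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, p["sig"])
            and bool(p["payload"]["clear"])
            and now < p["payload"]["valid_until"])
```

Scanning the QR code would then reveal only “clear until date X, signed by the issuer,” not why the holder is clear.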
I do not carry that device by choice. The social expectation that people can call me at any time is too strong. I’m as hooked as any junkie now.
I appreciate your willingness, your previous comments made me think you weren’t. I apologize for my hostility. I think we can agree we should strive to uphold privacy to the utmost, but not at the expense of lives.
That’s true, and that system would be more secure. But the additional detail could provide utility that outweighs that concern.
Insurance companies already have access to your medical history in the US. Equitable health care is an ongoing struggle here. ¯\_(ツ)_/¯
Edit: I removed parts about US law that could be incorrect, as IANAL.
Deep breath, C-f HIP … sigh
Damn it, no, this is incredibly wrong.
HIPAA applies to covered entities and business associates only. Covered entities are health care providers, insurance plans, and clearinghouses/HIEs. Business associates are companies that provide services to covered entities – so if you are an independent medical coder that reads doctor notes and assigns ICD10 codes, you’re covered because you provide services to a covered entity. How do you know if you’re a business associate? You’ve signed a BAA.
Movie theaters are not covered entities, and are not business associates. HIPAA has zero bearing on what they do. Your movie theater clerk could absolutely mandate you share your vaccination status – just like your doughnut vendor can ask in exchange for a free doughnut.
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
But I admit I am surprised they’d legally be able to store and sell your medical records. It seems you’re correct, and I had incorrectly generalized my experience and knowledge dealing with other covered entities all day to non-covered entities. A classic blunder of a programmer speaking about law, whoops. I’ve cut those statements from my prior comment.
I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
It is conceivable you could make an ADA argument here – “I can’t get a COVID vaccination due to a medical condition; therefore, you need to provide a reasonable accommodation to me”. But that’s maybe a stretch, I’m not sure.
I think a lot of this comes down to training about HIPAA. If you’re in-scope for HIPAA, many places (rightfully) treat PHI as radioactive and communicate that to employees. And there’s very little risk in overstating the risk around mishandling PHI - it’s far safer to overmessage the dangers to people who work with it.
Indeed, until I needed to get involved on the compliance side – after all, somebody has to quote HITRUST controls for RFPs – I overfit HIPAA as well.
If you want to feel marginally better, go read up on 42 CFR Part 2. It still only applies to covered entities but it offers real, meaningful protections to an especially vulnerable population: people seeking treatment for substance use disorder. It also makes restrictions around HIPAA data handling look trivial.
Possibly. That would need to be studied and justified, I believe.
Actually that’s what I expect from official programs, including in France. The problem is the QR code itself: any program can read it, and it’s too easy (and therefore tempting) to write or use a program that displays (or records!) everything.
Hmm, that’s less horrible than I thought, then. Glad to hear it.
As @owen points out, IANAL and these laws don’t apply in this circumstance. I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
It’s interesting the SMART Health Card standard implemented here is entirely incompatible with the Digital COVID Certificate standard (Interoperable 2D Code, pdf) being rolled out in the EU (and currently used for the digital NHS England COVID Pass).
Perhaps the IATA Travel Pass will be more successful as a unifying standard.
Just for fun, the contents of the EU covid cert have a much more concise-looking schema than the US one (less XML-y deep structure and magic URLs). And the European container seems to be CBOR + Base45, vs. the US one being JSON base64’d then run through a transform that doubles the byte count by turning everything into decimal digits. Both use gzip. (Ed: turns out QR codes have a numeric encoding that makes three decimal digits take only ten bits, so the US way is transmitting 6 bits in 6 and 2/3 bits on average, ~90% efficient. And Base45 gets 16 bits in three 5.5-bit chars, ~97% efficient. Now it all makes more sense!)
Interesting that both versions seem to fit in that size QR code (must just be able to hold a lot); I’d’ve thought even with gzip, everything in the US structure would be a tight fit.
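The efficiency arithmetic above is easy to reproduce in a few lines of Python (just the calculation, not the actual encoders):

```python
# QR numeric mode packs 3 decimal digits into 10 bits -> 10/3 bits per digit.
# US (SMART Health Card) scheme: each base64url char (6 bits of payload)
# becomes 2 decimal digits.
us_bits_carried = 6
us_bits_spent = 2 * (10 / 3)          # ~6.67 bits in QR numeric mode
us_efficiency = us_bits_carried / us_bits_spent   # 0.90

# QR alphanumeric mode packs 2 chars into 11 bits -> 5.5 bits per char.
# EU (Base45) scheme: 2 bytes (16 bits of payload) become 3 chars.
eu_bits_carried = 16
eu_bits_spent = 3 * 5.5               # 16.5 bits
eu_efficiency = eu_bits_carried / eu_bits_spent   # ~0.97

print(f"US ~{us_efficiency:.0%}, EU ~{eu_efficiency:.0%}")  # US ~90%, EU ~97%
```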
Note that what the US one is using is a standardised interoperable healthcare format called FHIR. The json representation looks pretty verbose, but handles many things you’d forget when coming up with your own format to represent healthcare data.
Just look at the FHIR R4 definition for HumanName in the context of Patient:

- name HumanName 0..*: A person may have 0, 1, or more names
- HumanName.use {usual, temporary, official, nickname, maiden, ...}: The context of this HumanName; does this person use it as a nickname, is it the person’s maiden name, …
- family string 0..1: May or may not have a family name
- given string 0..*: 0 or more given names
- period Period 0..1: The time period this name was/is/will be used

And this is just a small extract from just the HumanName data type. FHIR also has a system to manage logical IDs as well as external IDs (i.e. if a Patient is tracked in different databases in a hospital), support for various code systems used in healthcare (ICD-10, CPT, …), the most complex/complete system to handle temporal information I’ve seen, a super-integrated extension mechanism, … The whole documentation, data schema definition and basically everything is also completely machine-readable.
It’s very complex, but I recommend everyone who does some sort of data modelling to take a look at some of the concepts. It’s a great inspiration.
Source: I’ve been working with FHIR for a few years now :-)
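To make that concrete, here’s roughly what a Patient carrying those HumanName fields looks like in FHIR’s JSON representation, written as a Python dict (the values follow the spec’s well-known “Peter James Chalmers” example, though this is a trimmed-down illustration, not a complete resource):

```python
# A minimal FHIR R4 Patient resource showing the HumanName fields
# discussed above (use, family, given, period).
patient = {
    "resourceType": "Patient",
    "id": "example",                 # logical ID, assigned by the server
    "name": [                        # 0..* HumanName entries
        {
            "use": "official",
            "family": "Chalmers",            # 0..1 family name
            "given": ["Peter", "James"],     # 0..* given names
            "period": {"start": "2001-05-06"},  # when this name applies
        },
        {
            "use": "nickname",
            "given": ["Jim"],        # nothing forces a family name here
        },
    ],
}
```

Verbose, yes, but every field that looks redundant in a toy example earns its keep once you hit real-world records (name changes, maiden names, multiple registries).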
I always loved how Stripe’s REST API handles opaque IDs as a way to prevent confusion. While the Backwards-compatible changes documentation calls out “adding or removing fixed prefixes” as a backwards-compatible change, you’ll notice opaque IDs generated by Stripe usually include a short, human-readable prefix describing the ID. Some examples:
- pk_test_TYooMQauvdEDq54NiTphI7jx
- sk_test_4eC39HqLyjWDarjtT1zdp7dc
- ch_1IVMF02eZvKYlo2CyPTlPI5a
- txn_1032HU2eZvKYlo2CEPtcnUvl
- card_19yUNL2eZvKYlo2CNGsN6EWH

You’re not meant to rely on these within your own code (I think some of the other suggestions in this post around strict type systems are far more applicable in that case), but they’re brilliant sanity-checks while running through a debugger’s stack view to make sure you’ve not accidentally referenced the wrong variable. Doubly so since Stripe’s documentation provides examples of the fixed prefixes for their API responses.
This is nice, and probably works well with the more “dynamic” languages used on the web. I wonder if they use this representation in the database as well, or if this is somehow “decoded” somewhere, and if it is, what they use as an internal representation.
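For illustration, a toy version of the pattern might look like this in Python (entirely hypothetical: make_id and expect are made-up names, and nothing here claims to match how Stripe actually generates or stores its IDs):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def make_id(prefix: str, length: int = 24) -> str:
    """Generate a prefixed opaque ID, e.g. 'ch_...' for a charge."""
    body = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return f"{prefix}_{body}"

def expect(id_: str, prefix: str) -> str:
    """Debug-time sanity check: fail loudly if the wrong kind of ID is
    passed around, instead of silently looking up the wrong record."""
    if not id_.startswith(prefix + "_"):
        raise TypeError(f"expected a {prefix!r} ID, got {id_!r}")
    return id_

charge_id = make_id("ch")
expect(charge_id, "ch")      # fine
# expect(charge_id, "cus")   # would raise TypeError
```

Whether the prefixed string is also the database key, or gets stripped/decoded to an internal integer at the boundary, is exactly the open question from the comment above; either design works with this scheme.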