API keys are used to secure the highest-stakes APIs that exist today — all of AWS’s services, for example. Yet while API keys seem to be considered an entirely reasonable and industry standard design approach, passwords are now considered the unwelcome black sheep whose role as a sufficient criterion for authentication is viewed with increasing dubiousness.
Since the user-specified password functionality is now seemingly so distrusted as a widespread industry practice, it raises the question of why not just either use only TOTP for login, or issue a password in the same way that TOTP secrets are issued: randomly and non-customisably.
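For what it’s worth, issuing a random, non-customisable secret and checking TOTP codes against it is only a handful of lines; a minimal RFC 6238 sketch using just the Python standard library (parameter choices are illustrative):

import base64, hashlib, hmac, secrets, struct, time

def issue_secret():
    # Enrolment: hand the user a random, non-customisable base32 secret.
    return base64.b32encode(secrets.token_bytes(20)).decode()

def totp(secret_b32, at=None, digits=6, step=30):
    # RFC 6238: HMAC-SHA1 over the big-endian time-step counter, then dynamic truncation.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(base64.b32decode(secret_b32), struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, code, drift=1):
    # Accept the current 30-second step plus/minus `drift` steps of clock skew.
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + i * 30), code) for i in range(-drift, drift + 1))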
Well, the government are not joking. What happened to medical confidentiality?
Having to prove you have a vaccination has been a requirement in all manner of situations before this - like international travel.
I live in France, and a number of vaccines are already mandatory (for obvious public health reasons).
I’ve never had to present a proof of vaccination when I go to the theatre. Or theme park. Or anywhere within my country for that matter. Even for international travel, I didn’t need to give the USA such proof when I came to see the total solar eclipse in 2019. I’ve also never had to disclose the date of my vaccines, or any information about my health.
What you call “all manner of situation” is actually very narrow. This certificate is something new. A precedent.
and a number of vaccines are already mandatory (for obvious public health reasons).
This is why you’ve not been asked for proof for international travel, since it’s assumed that you’ll have received these immunisations or be unexposed through herd immunity as someone who resides in France.
We’re currently in a migration period where some people are immunised and others aren’t. We’ve had this happen before – the WHO is responsible for coordinating the Carte Jaune standard (first enforced on 1 August 1935) to aid with information sharing, but they haven’t extended it to include COVID-19 immunisation yet.
In a 1972 article, the NYTimes headlined “Travel Notes: Immunization Cards No Longer Needed for European Trips” regarding Smallpox immunisations.
Still, even today, immigrants applying to the United States for permanent residency remain required to present evidence of vaccinations recommended by the CDC: https://www.cdc.gov/immigrantrefugeehealth/laws-regs/vaccination-immigration/revised-vaccination-immigration-faq.html#whatvaccines
(Note: international travel is one use case where I believe it’s perfectly legitimate to ask for evidence of vaccination. It’s the only way a country can make sure it won’t get some public health problems on its hands, which makes it a matter of sovereignty.)
It’s not the government that’s sharing this information. It’s you when you present that QR code. This is equivalent to your doctor printing out a piece of your medical records and handing it to you. You can do whatever the hell you want with that piece. It’s your medical history. If you want to show it to someone, you can. If you don’t want to show it to someone, you can. The government only issues the pass. Nothing more.
The QR code has a very important difference with a piece of paper one would look at: its contents are trivially recorded. A piece of paper, on the other hand, is quickly forgotten.
This is equivalent to your doctor printing out a piece of your medical records and handing it to you.
No, this is equivalent to me printing out a piece of my medical record and handing it to the guard at the entrance of the theatre. And I’m giving them way more than what they need to know. They only need a cryptographic certificate with an expiration date, and I’m giving them when I got my shot or whether I’ve been naturally infected. I can already see insurance companies buying data from security companies.
You can do whatever the hell you want with that piece. It’s your medical history.
There’s a significant difference between the US and the EU here that is worth emphasising. In the US, your personal information (such as your medical history) is kind of your property. You can give it or sell it and all sorts of things. In the EU, however, your personal information is a part of you, and as such is less alienable than your property. I personally align with the EU more than the US on this one, because things that describe you can be used to influence, manipulate, and in some cases persecute you.
If you want to show it to someone, you can. If you don’t want to show it to someone, you can.
Do I really have that choice? Can I really choose not to show my medical history if it means not showing up at the theatre or any form of crowded entertainment ever? Here’s another one: could you actually choose not to carry a tracking device with you nearly at all times? Can you live with the consequences of no longer owning a cell phone?
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status? And why should someone medically unable to be vaccinated care about your privacy when their life is at risk?
As someone whose father is immunocompromised, and with a dear friend who could not receive the vaccine due to a blood disease, fuck off. People have died.
Since you’re forcing my hand, know that I received my first injection not long ago, and have my appointment for the second one. Since I have good health, I don’t mind sharing too much.
What I do mind is that your father and dear friend have to share their information. Your father will likely need more than 2 injections; if that’s written down, people can suspect he is immunocompromised. Your friend will be exempt; if that’s written down, people can suspect some illness. That makes them vulnerable, and I don’t want that. They may not want that.
Now let’s say we do need that certificate. Because yes, I am willing to give up a sliver of liberty for the health of us all. The certificate only needs 3 things:
Information that can be linked to your ID (some number, your name…)
An expiration date.
A cryptographic certificate from the government.
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
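To make that concrete, here is a rough sketch of such a minimal pass/fail credential (a hypothetical format using Ed25519 via the cryptography package; the real EU pass is CBOR-based and carries more fields than this):

import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign nothing but an ID reference and an expiration date.
issuer_key = Ed25519PrivateKey.generate()
payload = json.dumps({"id": "FR-1234567890", "exp": int(time.time()) + 90 * 86400}).encode()
pass_blob = payload + issuer_key.sign(payload)  # Ed25519 signatures are always 64 bytes

# Verifier side: learns only "this ID is clear until exp", never why.
def check(blob, issuer_public_key):
    body, sig = blob[:-64], blob[-64:]
    try:
        issuer_public_key.verify(sig, body)
    except InvalidSignature:
        return False
    return json.loads(body)["exp"] > time.time()

print(check(pass_blob, issuer_key.public_key()))  # True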
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status?
I do not carry that device by choice. The social expectation that people can call me at any time is too strong. I’m as hooked as any junkie now.
I am willing to give up a sliver of liberty for the health of us all.
I appreciate your willingness, your previous comments made me think you weren’t. I apologize for my hostility. I think we can agree we should strive to uphold privacy to the utmost, but not at the expense of lives.
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
That’s true, and that system would be more secure. But the additional detail could provide utility that outweighs that concern.
I can already see insurance companies buying data from security companies.
Insurance companies already have access to your medical history in the US. Equitable health care is an ongoing struggle here. ¯\_(ツ)_/¯
Edit: I removed parts about US law that could be incorrect, as IANAL.
HIPAA states PHI (personal health information) cannot be viewed by anyone without a need to know that information, and information systems should never even allow unauthorized persons to view that information in the first place. Device or software that displayed PHI to a movie theatre clerk would never go to market because it would never pass HIPAA compliance.
Deep breath, C-f HIP … sigh
Damn it, no, this is incredibly wrong.
HIPAA applies to covered entities and business associates only. Covered entities are health care providers, insurance plans, and clearinghouses/HIEs. Business associates are companies that provide services to covered entities – so if you are an independent medical coder that reads doctor notes and assigns ICD10 codes, you’re covered because you provide services to a covered entity. How do you know if you’re a business associate? You’ve signed a BAA.
Movie theaters are not covered entities, and are not business associates. HIPAA has zero bearing on what they do. Your movie theater clerk could absolutely mandate you share your vaccination status – just like your doughnut vendor can ask in exchange for a free doughnut.
Your movie theater clerk could absolutely mandate you share your vaccination status
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
But I admit I am surprised they’d legally be able to store and sell your medical records. It seems you’re correct, and I had incorrectly generalized my experience and knowledge dealing with other covered entities all day to non-covered entities. A classic blunder of a programmer speaking about law, whoops. I’ve cut those statements from my prior comment.
I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
It is conceivable you could make an ADA argument here – “I can’t get a COVID vaccination due to a medical condition; therefore, you need to provide a reasonable accommodation to me”. But that’s maybe a stretch, I’m not sure.
But I admit I am surprised they’d legally be able to store and sell your medical records
I think a lot of this comes down to training about HIPAA. If you’re in-scope for HIPAA, many places (rightfully) treat PHI as radioactive and communicate that to employees. And there’s very little risk in overstating the risk around mishandling PHI - it’s far safer to overmessage the dangers to people who work with it.
Indeed, until I needed to get involved on the compliance side – after all, somebody has to quote HITRUST controls for RFPs – I overfit HIPAA as well.
I’m yet again disappointed by US privacy law.
If you want to feel marginally better, go read up on 42 CFR Part 2. It still only applies to covered entities but it offers real, meaningful protections to an especially vulnerable population: people seeking treatment for substance use disorder. It also makes restrictions around HIPAA data handling look trivial.
But the additional detail could provide utility that outweighs that concern.
Possibly. That would need to be studied and justified, I believe.
Furthermore any reader of these QR codes should only return a pass/fail result, […]
Actually that’s what I expect from official programs, including in France. The problem is the QR code itself: any program can read it, and it’s too easy (and therefore tempting) to write or use a program that displays (or records!) everything.
HIPAA laws are some of the few here that have teeth
Hmm, that’s less horrible than I thought then. Glad to hear it.
Hmm, that’s less horrible than I thought then. Glad to hear it.
As @owen points out, IANAL and these laws don’t apply in this circumstance. I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
Just for fun, the contents of the EU covid cert have a much more concise-looking schema than the US one (less XML-y deep structure and magic URLs). And the European container seems to be CBOR + Base45 vs. the US one JSON base64’d then run through a transform that doubles byte count turning everything into decimal digits. Both use gzip. (Ed: turns out QR codes have a numeric encoding that makes three decimal digits only take ten bits, so the US way is transmitting 6 bits in 6 and 2/3 bits on average, ~90% efficient. And Base45 gets 16 bits in three 5.5-bit chars, ~97% efficient. Now it all makes more sense!)
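Checking the arithmetic in that edit (QR numeric mode: 3 digits in 10 bits; alphanumeric mode: 2 characters in 11 bits; Base45: 2 bytes in 3 characters), a quick sketch:

# QR numeric mode packs 3 decimal digits into 10 bits; the US scheme (as described
# above) turns each 6-bit base64 JWS character into 2 decimal digits.
us_efficiency = 6 / (2 * 10 / 3)        # 6 payload bits carried in ~6.67 stored bits
# QR alphanumeric mode packs 2 characters into 11 bits; Base45 turns 2 payload
# bytes (16 bits) into 3 such characters.
eu_efficiency = 16 / (3 * 11 / 2)       # 16 payload bits carried in 16.5 stored bits
print(f"US numeric-mode path: {us_efficiency:.0%}")   # ~90%
print(f"EU Base45 path:       {eu_efficiency:.0%}")   # ~97%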
Interesting that both versions seem to fit in that size QR code (must just be able to hold a lot); I’d’ve thought even with gzip, everything in the US structure would be a tight fit.
Note that what the US one is using is a standardised interoperable healthcare format called FHIR.
The json representation looks pretty verbose, but handles many things you’d forget when coming up with your own format to represent healthcare data.
Just look at the FHIR R4 definition for HumanName in context of Patient
name HumanName 0..*: A person may have 0, 1, or more names
For each HumanName:
use {usual, temporary, official, nickname, maiden, ...}: The context of this HumanName; does this person use it as a nickname, is it the person’s maiden name, …
family string 0..1: May or may not have a family name
given string 0..*: 0 or more given names (first and middle names)
period Period 0..1: The time period this name was/is/will be used
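As a rough illustration (an invented, non-validated fragment), a Patient using those HumanName fields might look like:

# Illustrative only: a Patient carrying two HumanName entries with the fields above.
patient = {
    "resourceType": "Patient",
    "name": [
        {
            "use": "official",
            "family": "Dupont",
            "given": ["Marie", "Claire"],        # zero or more given names
            "period": {"start": "2005-06-01"},   # when this name was/is in use
        },
        {"use": "maiden", "family": "Martin"},
    ],
}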
And this is just a small extract from just the HumanName data type. FHIR also has a system to manage logical IDs as well as external IDs (i.e. if a Patient is tracked in different databases in a hospital), support for various code systems used in healthcare (ICD-10, CPT, …), the most complex/complete system to handle temporal information I’ve seen, a super-integrated extension mechanism, …
The whole documentation, data schema definition and basically everything is also completely machine-readable.
It’s very complex, but I recommend everyone who does some sort of data modelling to take a look at some of the concepts. It’s a great inspiration.
Source: I’ve been working with FHIR for a few years now :-)
I always loved how Stripe’s REST API handles opaque IDs as a way to prevent confusion. While the Backwards-compatible changes documentation calls out “adding or removing fixed prefixes” as a backwards-compatible change, you’ll notice opaque IDs generated by Stripe usually include a short, human-readable prefix describing the ID. Some examples:
Publishable API key: pk_test_TYooMQauvdEDq54NiTphI7jx
You’re not meant to rely on these within your own code (I think some of the other suggestions in this post around strict type systems are far more applicable in that case), but they’re brilliant sanity-checks while running through a debugger’s stack view to make sure you’ve not accidentally referenced the wrong variable. Doubly so since Stripe’s documentation provides examples of the fixed prefixes for their API responses.
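On the strict-type-system point, even in Python you can get some of that protection by giving each kind of opaque ID its own type; a sketch (the cus_ prefix is an assumption for illustration):

from typing import NewType

PublishableKey = NewType("PublishableKey", str)
CustomerId = NewType("CustomerId", str)   # "cus_" prefix is an assumption for illustration

def expect_prefix(value, prefix):
    # Cheap runtime sanity check mirroring what you'd eyeball in a debugger.
    if not value.startswith(prefix):
        raise ValueError(f"expected an ID starting with {prefix!r}, got {value!r}")
    return value

pk = PublishableKey(expect_prefix("pk_test_TYooMQauvdEDq54NiTphI7jx", "pk_"))

def charge(customer: CustomerId) -> None: ...

charge(pk)  # a type checker flags this: PublishableKey is not a CustomerId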
This is nice, and probably works well with the more “dynamic” languages used on the web. I wonder if they use this representation in the database as well, or if this is somehow “decoded” somewhere, and if it is, what they use as an internal representation.
Ironically, I learnt about ECH because our corporate Palo Alto firewall blocked it by default.
There’s one thing I fail to understand however. Cloudflare says they’ll treat any request with SNI “cloudflare-ech.com” as ECH, but how is the client supposed to send that SNI in the first place? If I want to reach “randombits.tld”, how do I know that I must use Cloudflare’s ECH as the outer SNI? Is there some magic DNS trick that’s not mentioned in the docs?
Was wondering the same… there’s an older Cloudflare blog from 2020 that notes reliance on an HTTPS Resource Record for ECH configuration. Their developer docs also suggest corporate networks can break ECH by manipulating/dropping these DNS records.
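For reference, that HTTPS record can be inspected directly to see what a client would pick up before connecting; a sketch assuming dnspython >= 2.1 (the queried name is a placeholder):

import dns.resolver   # dnspython >= 2.1 understands the HTTPS (type 65) rdata type

# Placeholder name: substitute the site you actually want to reach.
for rdata in dns.resolver.resolve("randombits.tld", "HTTPS"):
    # The printed SvcParams include ech=<base64 ECHConfigList>; that ECHConfigList
    # carries the public_name the client then sends as the cleartext outer SNI.
    print(rdata.to_text())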
It seems the structure for this is defined at https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni-16#section-4, specifically public_name for the outer SNI value.
Appreciate human-to-server auth is the primary focus here, not server-to-server, but I figured it’d be worth noting OIDC tends to be the preferred mechanism (e.g., GitHub or GitLab) - particularly after Travis CI had a breach impacting secrets for OSS repos in ’21 and CircleCI had a breach impacting all secrets in January
–
Passkeys?
edit: ah, /u/yawaramin also noted Passkeys while I was typing up my comment :)
Does anyone have an explanation of what this means? If I understand correctly (guessing at many of the words, from context), this is an IRC host, disabling a thing that lets you transparently project IRC channels as if they were Matrix things and instead requiring you to explicitly configure this? Presumably this is because a lot of people were able to send spam via Matrix to Matrix to IRC and the IRC server had no recourse other than to ban everyone on the closest Matrix hop?
I think your summary is pretty close (Matrix effectively operates an IRC bouncer for portalled rooms)- there’s also a post from Matrix themselves at https://matrix.org/blog/2023/07/deportalling-libera-chat/ which goes into more detail
That post by Matrix is really really good, a masterclass in honest and respectful communication. Neil Johnson, your writing was a joy to read.
Really appreciate the author’s thinking here around psychological safety empowering teams to make decisions that are flexible to change in the future- it reminds me of the practices described in https://kind.engineering/
Since a large part of your critique is focused on signatures signed by outdated keys, it occurs to me that this implies that a secure use of public signatures would be to remember all the signatures you’ve made, and periodically update them, even if nothing about the software has changed.
I’m not sure that substituting minisign, ssh, or whatever the preferred signature tool du jour would make a difference in this regard; this is a shortcoming of build infrastructure.
I understand this is part of the reasoning behind Rekor within Sigstore- a compromised key (due to old algos or leaks) shouldn’t be capable of creating unwanted signatures without being easily detectable.
Admittedly, Sigstore’s Fulcio only issuing certificates valid for 10 minutes means meaningful key compromise is far less likely than using long-lived PGP/SSH/minisign keys (you’d hopefully not request a certificate with an algorithm weak enough to be crackable within 10 minutes anyway ^^;).
I think it does? According to this blogpost, ssh will refuse SSHFP entries that are not signed.
Personally, I think SSHFP is a better way to solve the TOFU-problem with SSH, rather than requiring every SSH server to also run a webserver, and announcing their existence via the CT-log (a consequence of requesting a WebPKI certificate).
Hi! Thank you for the feedback. I agree that DNS would be the perfect place for host key fingerprints.
It is possible to configure resolv.conf so that ssh can get that bit of information. How often is that done? And that is not the only problem of DNSSEC.
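For context, an SSHFP record is just an algorithm number, a hash type, and a fingerprint over the host key blob (what ssh-keygen -r prints); a small sketch of computing the SHA-256 variant, assuming a standard host key path:

import base64, hashlib

# SSHFP RDATA is <key algorithm> <fingerprint type> <fingerprint>; fingerprint type 2
# is SHA-256 over the raw (base64-decoded) host key blob, as ssh-keygen -r emits.
with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:   # path is an assumption
    key_blob = base64.b64decode(f.read().split()[1])

fingerprint = hashlib.sha256(key_blob).hexdigest()
print(f"host.example.org. IN SSHFP 4 2 {fingerprint}")  # 4 = Ed25519, 2 = SHA-256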
imo CT logs being mandatory isn’t a bad thing: they allow you to be certain no one has created a certificate for your domain that would be accepted by browsers. You could avoid having particular servers/subdomains be identifiable by issuing each a wildcard certificate (Let’s Encrypt even supports free wildcard certs through a DNS challenge).
With DNSSEC, it’s hard to prove that a registrar or the TLD operator hasn’t temporarily changed your DNSSEC keys without constant DNS monitoring. This is particularly worrying considering DNSSEC infrastructure is mostly controlled by world governments.
(my reasoning here is inspired by this masto post & the associated thread, linked to in a blog post linked by the original post.)
I’m not entirely sure I understand your point. You correctly describe why CT (“report all certificate issuances to Google”) is needed to keep WebPKI in check, but the question at hand is if it’s wise to use WebPKI for SSH, which it is not because it would require you to announce your SSH server before you’ve had a chance to set it up correctly.
I don’t know about you, but when I install a new box, I want to keep it off the internet (block incoming traffic except my own, keep it out of DNS) until I’ve set it up entirely. But I can’t do that anymore if it needs an incoming port 80 to the internet to do a little ACME song and dance to be recorded into CT before I can login to it, and I have just announced to the world that I just set up a new server, inviting everyone to start probing whether I did at least configure my firewall correctly.
Because now I suddenly have to run an internet-facing webserver before I run SSH (or I need to somehow let this new machine write stuff in its DNS zone, which is hard without an MDM solution in my home lab), and if ACME fails (I didn’t set up DNS, the machine did too many attempts) I’m locked out until Let’s Encrypt lets me back into my own machine again. Not to mention: how can you get a certificate for a mobile device, such as a laptop, that is on different networks and might thus not have a static name?
Or is all of this optional, because you can also login without the WebPKI bit? Then an attacker would simply need to block port 443 from your client to the SSH server, and you’re back to TOFU (which according to OP, isn’t good enough). An attacker may even be able to DoS you by dropping the traffic, making your client wait for a long timeout.
I was too focused on DNS in my previous reply. Did you note that the https server doesn’t have to be on the same host?
I missed that, yes. But how exactly does this work then?
First curl https://host.domain.tld/.well-known/ssh/host.domain.tld, then curl https://domain.tld/.well-known/ssh/host.domain.tld, and then curl https://tld/.well-known/ssh/host.domain.tld?
So for this “bubbling up” to make sense, you suddenly need to involve the public suffix list from Mozilla, to know where you need to stop (you could hardcode stopping at two elements, but then you would still attempt https://co.uk/…).
And it still doesn’t solve the issue: how do you handle errors? When do you decide to abort, or bubble up? If the HTTPS connection times out? If it answers 404? If it answers something else than 200 or 404? How long do you wait for an answer? Is that timeout per server or for the whole process?
As you can see, this solution has quite a lot of complexity connected to it when you think about it. Not to say there isn’t complexity in SSHFP+DNSSEC, I agree that DNSSEC is still a bit hard to set up, but that’s a problem with tooling, not the standard (it has been through multiple iterations simplifying it). As for the client setup, currently you may need to add trust-ad to your resolv.conf, but it doesn’t have to be like that.
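To illustrate how much policy hides in that bubbling up, here is roughly what the client-side lookup would have to look like; a sketch of the scheme as described above (the stop condition and error handling are exactly the open questions raised):

import requests

def lookup_hostkey(host, timeout=3.0):
    labels = host.split(".")
    for i in range(len(labels) - 1):            # hardcoded stop before the bare TLD:
        zone = ".".join(labels[i:])             # still tries co.uk etc. without the PSL
        url = f"https://{zone}/.well-known/ssh/{host}"
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException:
            continue                            # timeout/refused: abort, or keep bubbling?
        if resp.status_code == 200:
            return resp.text
        # 404: bubble up; 403/5xx/anything else: is that a "no" or a failure?
    return None                                 # and is the timeout per server or overall?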
I don’t see how any telemetry transmitted via the internet that is opt-out is not a direct violation of the GDPR. The IP address that is transmitted with it (in the IP packets) is protected information that you don’t have consent to collect - you failed at step 0 and broke the law before you even received the bits you actually care about.
Of course, the GDPR seems to be going routinely unenforced except against the largest and most blatant violations, but I really don’t see why a company like google would risk it. Why other large companies are actively risking it.
My understanding of the GDPR was that IP addresses are not automatically PII. Even in situations where they are, simply receiving a connection from an IP address does not incur any responsibilities because you require the IP for technical reasons to maintain the connection. It’s only when you record the IP address that it may hit issues. You can generally use some fairly simple differential privacy features to manage this (e.g. drop one of the bytes from your log).
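A common concrete version of that drop-a-byte idea, if request logs are kept at all (a sketch; the mask lengths are a policy choice, not legal advice):

import ipaddress

def anonymise(ip):
    # Zero the host part before anything touches disk: /24 for IPv4, /48 for IPv6.
    prefix = 24 if ipaddress.ip_address(ip).version == 4 else 48
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False).network_address)

print(anonymise("203.0.113.57"))     # 203.0.113.0
print(anonymise("2001:db8::1234"))   # 2001:db8::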
The EU has ruled that IP addresses are GDPR::PII, sadly.
There’s nothing sad about it. I bet that you think that your home address, ICBM coordinates, etc. are PII too.
Do you have a link to that ruling, I’d be very interested in reading it.
(30) Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.
(emphasis mine. via the GDPR text, Regulation (EU) 2016/679)
fwiw- “PII” is a US-centric term that isn’t used within GDPR, which instead regulates “processing personal data”.
This doesn’t actually say that collecting IP addresses is not allowed. It only states that when the natural person is known, online identifiers could be used to create profiles.
Furthermore this is only relevant if those online identifiers are actually processed and stored. According to the Google proposal they are not. They only keep record of the anonymous counters. Which is 100% fine with GDPR.
(IANAL) I’d seen analytics software like Fathom and GoatCounter rely on (as you mention) anonymised counters to avoid creating profiles on natural persons, but we’ve also seen a court frown upon automatic usage of Google Fonts due to automatic transmission of IP addresses to servers in the US.
It’s a shame the Go compiler isn’t well positioned UX-wise to ask users for opt-in consent at installation (as an IDE might), since that’d likely solve privacy concerns while reaching folks who don’t know about an opt-in config flag.
[admittedly, Google already receives IP addresses of Go users through https://proxy.golang.org/ anyway (which does log IP addresses, but “for [no] more than 30 days”) ¯\_(ツ)_/¯]
Yes, IP addresses are not automatically PII, but if you can’t ensure they are not, you must assume they are. The telemetry data itself is probably not PII, because it’s anonymized.
The GDPR prohibits processing[0] of (private) data, but contains some exceptions. The most commonly used one is fulfilling a contract (this doesn’t need to be a written contract with payment). So assume you have an online shop. A user orders, say, a printer: you need their address to send the printer to them. But when the user orders an ebook, you don’t need the address because you don’t need to ship the ebook. In the case of Go, the service would be compiling Go code. I don’t see a technical requirement to send Google your IP address.
The next common exception is some requirement by another law (e.g. tax law or money-laundering protection law). I think there is none here.
The next one is user consent: you know those annoying cookie banners. Consent must be explicit and can’t be assumed (and dark patterns are prohibited). So this requires an opt-in.
The next one would be legitimate interest. This is more or less the log-file exception. Here you might argue that the Go team needs this data to improve their compiler. I don’t think this would stand, because other compilers work pretty well without telemetry.
So altogether I[1] would say the only legal way to collect the telemetry data is some sort of user consent.
[0] Yes, processing, not only storing, so having a web server answering HTTP requests might also fall under the GDPR.
[1] I’m not a lawyer
You are wrong. The GDPR is not some magic checkbox that says “do not ever send telemetry”. The GDPR cares about PII and your IP address and a bunch of anonymous counters are simply not PII. There is nothing to enforce in this case.
If something is permitted by the law, it doesn’t automatically mean it’s also good
It’s a good thing that nobody’s arguing that, then.
Hah, you’re right, I must have mixed up two comments. Glad we all agree then :)
And what, exactly, is so wrong about MitM yourself, on your own network? Have we been so gaslit by “security specialists” that doing so on our own equipment is considered unthinkable? Or am I just an old man yelling at clouds?
There’s lots of research on the prevalence of people screwing up TLS interception like this (I recently looked for some so my team at work would have ammunition for refusing to do so on work laptops, which we manage).
That being said there’s a lot going for this approach:
1. Go’s TLS library is probably pretty reasonable and is likely to prevent a lot of common footguns here - not passing on certificate validation failures from the upstream origin, etc.
2. You’re only doing it for one website, t.co, instead of generic TLS connections, which significantly reduces attack surface (and complexity!).
3. You’re not doing this at scale/you’re probably not a target. Yeah, someone could do Bad Things™ with your root CA certificate if they got onto your network, but on a typical home network you’ve got bigger problems then. So meh?
🤷 seems PROBABLY okayish even though it makes me sweat a little! Not that I am an expert.
On point #3, the Name Constraints Extension appears to be a good mitigation against someone hijacking the root CA.
http://pkiglobe.org/name_constraints.html and https://www.sysadmins.lv/blog-en/x509-name-constraints-certificate-extension-all-you-should-know.aspx have interesting notes on how these constraints get applied to entity certificates by clients.
Unfortunately it seems some browsers only apply name constraints to intermediate CAs (https://bugs.chromium.org/p/chromium/issues/detail?id=1072083), so even this might not be a silver bullet.
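If you do go the private-CA route, adding the constraint itself is straightforward; a sketch with the Python cryptography package (names and lifetime are placeholders, and the browser caveat above still applies):

from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home lab interception CA")])
now = datetime.now(timezone.utc)

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                                   # self-signed root
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # The CA can only vouch for t.co; anything else should be rejected by clients
    # that honour name constraints.
    .add_extension(
        x509.NameConstraints(permitted_subtrees=[x509.DNSName("t.co")], excluded_subtrees=None),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)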
Admittedly, if you’re only hijacking one hostname, you might as well self-sign an entity certificate for the target hostname and directly add it to your trust stores (without creating a self-signed CA).
Relatedly, I wish there was a way to add a CA to your trust base BUT only for certain specific domains and subdomains.
There’s a standard for that. I was party to implementations of it, but I don’t think it got much traction on the internet at large. The easiest mainstream way is to certify it using a root that you control and add name constraints, but for that to be secure (in a general way) you need to own both CAs.
I didn’t know CA name constraints are a thing. Thank you.
Hehe, I guess I was just preparing for the deluge of disapprovals so spent a good while explaining myself!
Agree. This is not a terrible solution and I don’t see why it wouldn’t be recommended. This is a great hack and I love it.
Not to say this is a bad solution, but what happens when your friends come over and ask to use your wifi? Presumably they haven’t installed your CA’s root cert. (Ignore for a moment the fact that obviously any TRUE friend would install their friend’s root cert.)
Anyway the benefits outweigh the downsides, but it’s something to think about.
A much better solution is to abolish t.co altogether, which is now a lot closer to happening than I would have dared to hope six months ago! I haven’t followed a t.co link in months, and with any luck never will again, but I understand others might not be so fortunate at this time.
I operate an open wifi for friends to use, but that’s a fair point.
So you would only do this interception on the private WiFi then?
This is a fair point.
For my situation I actually don’t have Adguard as the DNS resolver on my router, mainly because I’ve never been able to get it to work, so I just update the DNS manually on devices instead - so friends and family won’t be affected (but they wouldn’t be able to use this tool) unless they specifically set the DNS resolver on their phones.
It’s a fair point though, I guess creating an isolated VLAN/guest network for guests would be another way around this.
Another valid reply is that t.co is inherently sketchy as hell, and getting a warning when you’re accessing it isn’t necessarily a bad thing. (But it would be better if the warning were clearer about the specific problem.)
It seems fears around recovery/device migration are a significant part of the rationale behind Apple’s passkeys implementation requiring iCloud Keychain sync https://twitter.com/rmondello/status/1534914697123667969 (referencing https://developer.apple.com/forums/thread/707539)
n.b. Passkey is a generic term for FIDO/WebAuthn credentials, which PyPI’s 2FA supports in addition to TOTP. PyPI also requires you to record a set of recovery codes and asks you to recite a code back during their 2FA setup process.
Speaking as someone who worked in the hosting biz and had to deal with this stuff, fears around recovery and device migration are all too legitimate. “I lost my 2FA” was one of my most-loathed support requests. Usually it was “I used the authenticator app on my old phone and forgot to migrate”.
As the article hints at, what makes MFA really viable is the hidden factor: human-to-human / human-to-organization relationships. Social relationships, not technical ones.
I’m also not comfortable with $bigtech_corp setting itself up as a trusted intermediary for the same reason. $bigtech_corp tends to be all about lack of accountability and destroying legitimate social relationships.
I have questions not answers, problems not solutions.
Or “my old phone is now toast and I forgot the authenticator was there and there goes all my access”
Thankfully I had my core device codes backed up, but some stuff I just had to write off to no longer having access to because there wasn’t a support team to engage.
I moved phones several years ago and had some but not all TFA codes migrate. Fortunately I noticed before I sent the old phone to recycling but jeez why was that a possible failure mode? All or none, ffs.
That recovery thing was my biggest concern when getting my old SE repaired and the upgrade to the 13. It went well though, but I always think about those things.
What is the interest of the EU to get involved in certificate issuance? Is it bureaucracy overreach or is there something else behind this effort?
My guess would be a genuine feeling that it’s not good for EU people that an American advertising company, an American browser vendor, an American computer company, and an American software company functionally control who’s allowed to issue acceptable certificates worldwide.
Sure, but then the answer is that the EU should make Mozilla open an office in Brussels or somewhere and then shovel money at Firefox, so that they have their own player in the browser wars. Tons of problems are created for users by the fact that Google and Apple have perverse incentives for their browsers (and that Mozilla’s incentive is just to figure out some source, any source, of funding). Funding Mozilla directly would give EU citizens a voice in the browser wars and provide an important counterbalance to the American browsers.
Directly funding a commercial entity tasked with competing with foreign commercial entities is a huge problem; Airbus and Boeing have had disputes about that for a long time: https://en.wikipedia.org/wiki/Competition_between_Airbus_and_Boeing#World_Trade_Organization_litigation
On the other side, passing laws that require compliance from foreign firms operating in the EU has been successful; for as much as it sucks and is annoying to both comply with and use websites that claim to comply with it, the GDPR has been mostly complied with.
A) In an EU context, it’s hard to argue that Airbus hasn’t been successful for promoting European values. If the WTO disagrees, that’s because the WTO’s job is not to promote European values. I can’t really imagine how Google or Apple could win a lawsuit against the EU for funding a browser since they give their browsers away for free, but anyone can file a lawsuit about anything, I suppose.
B) I don’t see how anyone can spend all day clicking through pointless banners and argue that the current regulatory approach is successfully promoting EU values. The current approach sucks and is not successful. Arguably China did more to promote its Chinese values with Tiktok than all the cookie banners of the last six years have done for the EU’s goals.
None of this is about “promoting EU values.”
The EU government’s goal for Airbus is to take money from the rest of the world and put it in European paychecks.
The goal of the GDPR is to allow people in Europe a level of consent and control over how private surveillance systems watch them. The GDPR isn’t just the cookie banners; it’s the idea that you can get your shit out of facebook and get your shit off facebook, and that facebook will face consequences when it comes to light that they’ve fucked that up.
Google could absolutely come up with a lawsuit if the EU subsidizes Mozilla enough to let Mozilla disentangle from Google and start attacking Google’s business by implementing the same privacy features that Apple does.
Yes, and it’s a failure because everyone just clicks agree, since the “don’t track me” button is hidden.
That’s one answer, but what does it have to be “the” answer?
A trusted and secure European e-ID - Regulation, linked to in the article’s opening, is a revision of existing eIDAS regulation aiming to facilitate interoperable eID schemes in Member States. eIDAS is heavily reliant on X.509 (often through smartcards in national ID cards) to provide a cryptographic identity.
The EU’s interest in browser Certificate Authorities stems from the following objective in the draft regulation:
“They should recognise and display Qualified certificates for website authentication to provide a high level of assurance, allowing website owners to assert their identity as owners of a website and users to identify the website owners with a high degree of certainty.”
… to be implemented through a replacement to Article 45:
“Qualified certificates for website authentication referred to in paragraph 1 shall be recognised by web-browsers. For those purposes web-browsers shall ensure that the identity data provided using any of the methods is displayed in a user friendly manner.”
Mozilla’s November 2021 eIDAS Position Paper, also linked in the original article, goes into more detail about the incompatibilities with the ‘Qualified Website Authentication Certificates’ scheme and the CA/Browser Forum’s policies.
Well, the government are not joking. What happened to medical confidentiality?
Having to prove you have a vaccination has been a requirement in all manner of situations before this - like international travel.
I live in France, and a number of vaccines are already mandatory (for obvious public health reasons).
I’ve never had to present a proof of vaccination when I go to the theatre. Or Theme park. Or anywhere within my country for that matter. Even for international travel, didn’t need to give the USA such proof when I came to see the total solar eclipse in 2019. I’ve also never had to disclose the date of my vaccines, or any information about my health.
What you call “all manner of situation” is actually very narrow. This certificate is something new. A precedent.
This is why you’ve not been asked for proof for international travel, since it’s assumed that you’ll have received these immunisations or be unexposed through herd immunity as someone who resides in France.
We’re currently in a migration period where some people are immunised and others aren’t. We’ve had this happen before– the WHO is responsible for coordinating the Carte Jaune standard (first enforced on 1 August 1935) to aid with information sharing, but they haven’t extended it to include COVID-19 immunisation yet.
In a 1972 article, the NYTimes headlines “Travel Notes: Immunization Cards No Longer Needed for European Trips” regarding Smallpox immunisations.
Still, even today, immigrants applying to the United States for permanent residency remain required to present evidence of vaccinations recommended by the CDC: https://www.cdc.gov/immigrantrefugeehealth/laws-regs/vaccination-immigration/revised-vaccination-immigration-faq.html#whatvaccines
(Note: international travel is one use case where I believe it’s perfectly legitimate to ask for a evidence of vaccination. It’s the only way a country can make sure it won’t get some public health problems on its hand, which makes it a matter of sovereignty.)
It’s not the government that’s sharing this information. It’s you when you present that QR code. This is equivalent to your doctor printing out a piece of your medical records and handing it to you. You can do whatever the hell you want with that piece. It’s your medical history. If you want to show it to someone, you can. If you don’t want to show it to someone, you can. The government only issues the pass. Nothing more.
The QR code has a very important difference with a piece of paper one would look at: its contents are trivially recorded. A piece of paper on the other hand is quickly be forgotten.
No, this is equivalent to me printing out a piece of my medical record and handing it to the guard at the entrance of the theatre. And I’m giving them way more than what they need to know. They only need a cryptographic certificate with an expiration date, and I’m giving them when I got my shot or whether I’ve been naturally infected. I can already see insurance companies buying data from security companies.
There’s a significant difference between the US and the EU here that is worth emphasising. In the US, your personal information (such as your medical history) is kind of your property. You can give it or sell it and all sorts of things. In the EU, however, your personal information is a part of you, and as such is less alienable than your property. I personally align with the EU more than the US on this one, because things that describe you can be used to influence, manipulate, and in some cases persecute you.
Do I really have that choice? Can I really choose not to show my medical history if it means not showing up at the theatre or any form of crowded entertainment, ever? Here’s another one: could you actually choose not to carry a tracking device with you nearly all the time? Can you live with the consequences of no longer owning a cell phone?
If you carry a tracking device with you at all times, why do you care about sharing your vaccination status? And why should someone medically unable to be vaccinated care about your privacy when their life is at risk?
As someone whose father is immunocompromised, and with a dear friend who could not receive the vaccine due to a blood disease, fuck off. People have died.
Since you’re forcing my hand, know that I received my first injection not long ago, and have my appointment for the second one. Since I’m in good health, I don’t mind sharing too much.
What I do mind is that your father and dear friend have to share their information. Your father will likely need more than 2 injections; if that’s written down, we can suspect a compromised immune system. Your friend will be exempt; if that’s written down, we can suspect some illness. That makes them vulnerable, and I don’t want that. They may not want that.
Now let’s say we do need that certificate. Because yes, I am willing to give up a sliver of liberty for the health of us all. The certificate only needs 3 things:
That’s it. People reading the QR-code can automatically know whether you’re clear or not, and they don’t need to know why.
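To make that concrete, here’s a rough sketch of the kind of minimal pass I mean. The exact fields are my own guess (just an expiry date plus an issuer signature; a real scheme would presumably also bind the holder’s name), not any official format:

```python
# Rough sketch of a data-minimising pass (my own illustration, not any official
# scheme): the QR payload carries only an expiry date and an issuer signature,
# so a verifier learns "clear until <date>" and nothing about why.
# Uses Ed25519 from the `cryptography` package.
from datetime import date
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # held by the health authority
issuer_pub = issuer_key.public_key()        # distributed to verifier apps

def issue_pass(valid_until: date) -> bytes:
    payload = valid_until.isoformat().encode()
    return issuer_key.sign(payload) + payload      # 64-byte signature, then payload

def verify_pass(blob: bytes, today: date) -> bool:
    sig, payload = blob[:64], blob[64:]
    issuer_pub.verify(sig, payload)                # raises InvalidSignature if forged
    return date.fromisoformat(payload.decode()) >= today

qr_payload = issue_pass(date(2021, 12, 31))
print(verify_pass(qr_payload, date(2021, 8, 1)))   # True: "clear", no medical detail
```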
I do not carry that device by choice. The social expectation that people can call me at any time is too strong. I’m as hooked as any junkie now.
I appreciate your willingness, your previous comments made me think you weren’t. I apologize for my hostility. I think we can agree we should strive to uphold privacy to the utmost, but not at the expense of lives.
That’s true, and that system would be more secure. But the additional detail could provide utility that outweighs that concern.
Insurance companies already have access to your medical history in the US. Equitable health care is an ongoing struggle here. ¯\_(ツ)_/¯
Edit: I removed parts about US law that could be incorrect, as IANAL.
Deep breath, C-f HIP … sigh
Damn it, no, this is incredibly wrong.
HIPAA applies to covered entities and business associates only. Covered entities are health care providers, insurance plans, and clearinghouses/HIEs. Business associates are companies that provide services to covered entities – so if you are an independent medical coder that reads doctor notes and assigns ICD10 codes, you’re covered because you provide services to a covered entity. How do you know if you’re a business associate? You’ve signed a BAA.
Movie theaters are not covered entities, and are not business associates. HIPAA has zero bearing on what they do. Your movie theater clerk could absolutely mandate you share your vaccination status – just like your doughnut vendor can ask in exchange for a free doughnut.
Yeah. As the movie theater is private property, and “unvaccinated” isn’t a protected group, they are allowed to discriminate all they want.
But I admit I am surprised they’d legally be able to store and sell your medical records. It seems you’re correct, and I had incorrectly generalized my experience and knowledge dealing with other covered entities all day to non-covered entities. A classic blunder of a programmer speaking about law, whoops. I’ve cut those statements from my prior comment.
I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
It is conceivable you could make an ADA argument here – “I can’t get a COVID vaccination due to a medical condition; therefore, you need to provide a reasonable accommodation to me”. But that’s maybe a stretch, I’m not sure.
I think a lot of this comes down to training about HIPAA. If you’re in-scope for HIPAA, many places (rightfully) treat PHI as radioactive and communicate that to employees. And there’s very little risk in overstating the risk around mishandling PHI - it’s far safer to overmessage the dangers to people who work with it.
Indeed, until I needed to get involved on the compliance side – after all, somebody has to quote HITRUST controls for RFPs – I overfit HIPAA as well.
If you want to feel marginally better, go read up on 42 CFR Part 2. It still only applies to covered entities but it offers real, meaningful protections to an especially vulnerable population: people seeking treatment for substance use disorder. It also makes restrictions around HIPAA data handling look trivial.
Possibly. That would need to be studied and justified, I believe.
Actually that’s what I expect from official programs, including in France. The problem is the QR code itself: any program can read it, and it’s too easy (and therefore tempting) to write or use a program that displays (or records!) everything.
Hmm, that’s less horrible than I thought, then. Glad to hear it.
As @owen points out, IANAL and these laws don’t apply in this circumstance. I still don’t think that vaccination information would be any news to insurance companies, but I’m yet again disappointed by US privacy law.
It’s interesting that the SMART Health Card standard implemented here is entirely incompatible with the Digital COVID Certificate standard (Interoperable 2D Code, pdf) being rolled out in the EU (and currently used for the digital NHS England COVID Pass).
Perhaps the IATA Travel Pass will be more successful as a unifying standard.
Just for fun, the contents of the EU covid cert have a much more concise-looking schema than the US one (less XML-y deep structure and magic URLs). And the European container seems to be CBOR + Base45, vs. the US one being JSON, base64’d, then run through a transform that doubles the byte count by turning everything into decimal digits. Both use gzip. (Ed: turns out QR codes have a numeric encoding that makes three decimal digits take only ten bits, so the US way transmits 6 bits in 6 and 2/3 bits on average, ~90% efficient. And Base45 gets 16 bits into three 5.5-bit chars, ~97% efficient. Now it all makes more sense!)
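If anyone wants to poke at the two container encodings, here’s a small sketch of both transforms as I read the specs (RFC 9285 Base45 for the EU cert, the digit transform from the SMART Health Cards spec); not production code:

```python
# Sketch of the two QR payload encodings compared above (my reading of the
# specs, not production code).

BASE45_ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def base45_encode(data: bytes) -> str:
    """EU DCC style (RFC 9285): every 2 bytes become 3 chars of a 45-symbol
    alphabet, which QR alphanumeric mode stores at 5.5 bits per char."""
    out = []
    for i in range(0, len(data), 2):
        chunk = data[i:i + 2]
        if len(chunk) == 2:
            n = chunk[0] * 256 + chunk[1]
            out += [n % 45, (n // 45) % 45, n // (45 * 45)]
        else:                       # a trailing single byte becomes 2 chars
            out += [chunk[0] % 45, chunk[0] // 45]
    return "".join(BASE45_ALPHABET[d] for d in out)

def shc_numeric_encode(jws: str) -> str:
    """SMART Health Card style: each char of the base64url JWS becomes the
    two decimal digits ord(c) - 45, stored in QR numeric mode
    (10 bits per 3 digits)."""
    return "".join(f"{ord(c) - 45:02d}" for c in jws)

print(base45_encode(b"Hello!!"))       # "%69 VD92EX0", the RFC 9285 example
print(shc_numeric_encode("eyJhbGci"))  # digit string for a numeric-mode QR
```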
Interesting that both versions seem to fit in that size QR code (must just be able to hold a lot); I’d’ve thought even with gzip, everything in the US structure would be a tight fit.
Note that what the US one is using is a standardised interoperable healthcare format called FHIR. The JSON representation looks pretty verbose, but it handles many things you’d forget when coming up with your own format to represent healthcare data.
Just look at the FHIR R4 definition for HumanName in the context of Patient:

- name HumanName 0..* : A person may have 0, 1, or more names
- HumanName.use {usual, temporary, official, nickname, maiden, …} : The context of this HumanName; does this person use it as a nickname, is it the person’s maiden name, …
- HumanName.family string 0..1 : May or may not have a family name (often called the surname)
- HumanName.given string 0..* : 0 or more given names
- HumanName.period Period 0..1 : The time period this name was/is/will be used

And this is just a small extract from just the HumanName data type. FHIR also has a system to manage logical IDs as well as external IDs (i.e. if a Patient is tracked in different databases in a hospital), support for various code systems used in healthcare (ICD-10, CPT, …), the most complex/complete system to handle temporal information I’ve seen, a super-integrated extension mechanism, … The whole documentation, data schema definition and basically everything is also completely machine-readable.
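To give a concrete picture, a Patient fragment using that name structure looks roughly like this (field names per FHIR R4 as I understand it, values invented):

```python
# Hand-written illustration of a FHIR R4 Patient fragment (values invented).
patient = {
    "resourceType": "Patient",
    "id": "example",                       # logical ID on this server
    "name": [                              # 0..* HumanName
        {
            "use": "official",             # usual | official | temp | nickname | maiden | ...
            "family": "Chalmers",          # 0..1 family name
            "given": ["Peter", "James"],   # 0..* given names
            "period": {"start": "2001-05-06"},  # when this name was/is in use
        },
        {
            "use": "nickname",
            "given": ["Jim"],
        },
    ],
}
```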
It’s very complex, but I recommend everyone who does some sort of data modelling to take a look at some of the concepts. It’s a great inspiration.
Source: I’ve been working with FHIR for a few years now :-)
I always loved how Stripe’s REST API handles opaque IDs as a way to prevent confusion. While the Backwards-compatible changes documentation calls out “adding or removing fixed prefixes” as a backwards-compatible change, you’ll notice opaque IDs generated by Stripe usually include a short, human-readable prefix describing the ID. Some examples:
pk_test_TYooMQauvdEDq54NiTphI7jx
sk_test_4eC39HqLyjWDarjtT1zdp7dc
ch_1IVMF02eZvKYlo2CyPTlPI5a
txn_1032HU2eZvKYlo2CEPtcnUvl
card_19yUNL2eZvKYlo2CNGsN6EWH
You’re not meant to rely on these within your own code (I think some of the other suggestions in this post around strict type systems are far more applicable in that case), but they’re brilliant sanity-checks while running through a debugger’s stack view to make sure you’ve not accidentally referenced the wrong variable. Doubly so since Stripe’s documentation provides examples of the fixed prefixes for their API responses.
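As a rough sketch of the pattern (nothing to do with Stripe’s actual implementation; the prefix table and helper names below are made up):

```python
import secrets

# Sketch of prefixed opaque IDs (illustrative only, not Stripe's code).
PREFIXES = {"charge": "ch", "customer": "cus", "card": "card"}

def new_id(kind: str) -> str:
    """Generate an opaque ID whose prefix hints at the object type."""
    return f"{PREFIXES[kind]}_{secrets.token_urlsafe(18)}"

def looks_like(kind: str, value: str) -> bool:
    """Cheap sanity check while debugging; not something to build program
    logic on, since the prefixes are documented as changeable."""
    return value.startswith(PREFIXES[kind] + "_")

charge_id = new_id("charge")            # e.g. "ch_3xK9f..."
assert looks_like("charge", charge_id)
assert not looks_like("card", charge_id)
```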
This is nice, and probably works well with the more “dynamic” languages used on the web. I wonder if they use this representation in the database as well, or if this is somehow “decoded” somewhere, and if it is, what they use as an internal representation.