I’m not sure I understand the security problem. This is for pre-shared keys: in this model, both endpoints have to hold the same key. If you encrypt it, then you need access to the key that encrypted it. If your threat model is an attacker with physical access to the device who can attach probes to the wires and grab things from the filesystem, then even something like a TPM won’t help you store the key that encrypts the key, because such an attacker can likely modify the firmware to ask the TPM to extract it. The only really secure way of doing this is to have a separate tamper-resistant chip with on-package non-volatile storage that also has the requisite crypto to do the handshake. I didn’t really pay attention to what WPA2 does, so a TPM might be able to do this, but it’s generally overkill. The same key will likely be unencrypted in the AP as well, and anyone who has physical access to the Echo can probably get to the AP. If they want to spy on network traffic, adding a box between the AP and the wired network will probably be easier.
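To make the constraint concrete: the supplicant on the device needs the credential in usable form at boot, unattended. A typical wpa_supplicant network block (SSID and passphrase invented for illustration) is simply:

```
network={
    ssid="HomeNetwork"
    # Stored in the clear. wpa_supplicant also accepts the precomputed
    # 64-hex-digit PSK here instead, but that is an equivalent credential
    # for joining the network, so nothing is gained by "hashing".
    psk="correct horse battery staple"
}
```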
this is why you shouldn’t implement your own crypto, folks!
I am confused by this comment. If you are referring to Amazon, then it seems they didn’t implement much, if any, cryptography and used an off-the-shelf component (wpa_supplicant). If you are referring to the article’s author, then I sort of get it, because the article has several issues that tell me they shouldn’t be near any real cryptographic system without a lot more training and experience. The author suggests hashing the wifi password, which doesn’t make sense since the password needs to be usable for the WPA protocol. You can hash it and pre-compute the key, but that key is still sufficient to connect to the wifi. They conflate this with password storage in a system that is a password validator, not the client. Then they added a note suggesting perhaps encrypting it in a proprietary format (which screams Kerckhoffs’s principle violation) and lamented that it would still be decryptable. Ultimately this device has to be able to boot unattended and connect to wifi, so their options for defense are limited.
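For what it’s worth, the “hash the password” idea collapses precisely because WPA2-PSK already is a hash: IEEE 802.11i derives the pairwise master key from the passphrase with PBKDF2-HMAC-SHA1 (SSID as salt, 4096 iterations), and that derived value is itself a sufficient credential. A minimal Python sketch (the SSID and passphrase are made up):

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2 pre-shared key per IEEE 802.11i:
    PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 256-bit output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Hypothetical credentials, for illustration only.
psk = wpa2_psk("correct horse battery staple", "HomeNetwork")
print(psk.hex())  # 64 hex digits
```

This is exactly what the `wpa_passphrase` tool emits, and wpa_supplicant accepts the hex value directly in its `psk=` field. So storing the “hashed” form instead of the passphrase gains nothing: it is the very value used in the handshake.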
To me, this article seems to lack a discussion of the threat model. This is an embedded device with limited resources. I have no doubt the wifi password is recoverable if you crack it open.
I had similar takeaways on my reading, but keep in mind the author is 14 years old.
Honestly the bit about exfiltrating the Spotify API key by shorting out a capacitor during boot is pretty impressive on its own.
Certainly impressive at that age. It does reiterate my point about needing more training and experience to work with cryptographic systems.
So … where do new crypto experts come from then, if no one should implement crypto?
They roll their own crypto for fun and to learn, but don’t deploy it in production. They show it to experienced cryptographers to learn what they did wrong (note: experienced cryptographers never stop doing this). They go through many rounds of peer review until their work is accepted as probably not wrong. They do this huge amount of work so that people like me can avoid rolling our own crypto, because the effort involved in doing it right is way more than is worthwhile for any single project.
I think you misunderstand “don’t roll your own crypto”. It doesn’t say don’t build your own crypto. But you have to keep in mind that it’s probably completely insecure. So if you have built your own crypto, don’t use it. If you are lucky, someone will look at your crypto and explain the problems with it.
To start practically with crypto, you can also look at some known bugs and try to exploit them. I think there is an online course for this.
This issue isn’t about rolling their own crypto. Good secrets management is a hard problem that sometimes pulls in applied cryptographers.