(Not directly related to the point of the post, but) this made me think of the latest episode of the Developer Voices podcast, where Kris interviews someone who builds a service that lets smart TV application developers test their builds on real TVs/set-top boxes. One of the bits of tech they’ve made to do this is a very high-quality video stream of what the TV is showing: four cameras pointed at quadrants of the screen, with the feeds stitched together and colour- and perspective-corrected. I just thought it was rather amusing that the wider industry has had to essentially build a high-quality piracy device in order to work around the DRM required by another part of the industry.
I really don’t understand this DRM stuff. It clearly doesn’t work: high-resolution copyrighted material is freely available on pirate sites, often on the very day it’s released.
If it worked, how is this possible? Or, put differently, assuming it works, how do the release teams work around DRM?
It’s not about preventing piracy, it’s about controlling the distribution chain, and the people being scammed are the copyright holders.
Tech companies convinced media companies that they needed DRM to ‘prevent piracy’. Apple was initially the most successful. They convinced three of the big four music labels to let them distribute their music as pure data files, with no associated physical medium, and promised that their DRM would prevent piracy. Apple controlled the DRM scheme and so no one could create a competing music store that let people put their purchased music on the iPod and, since the iPod was the most popular mobile player, this gave Apple a near monopoly on the Internet distribution chain for music. That was great for Apple. Less good for the music labels and, somewhat shockingly, they noticed and allowed DRM-free downloads from other music stores. As I recall, piracy actually decreased after this (shockingly, if you sell people the product they want at a sensible price, they will buy it).
The movie industry didn’t learn from this. A load of tech companies wanted to replicate the success of the Apple experiment and kept telling movie studios how important DRM was. Netflix got various SmartTV vendors to add a Netflix app. You want to stream to them after they stop getting updates from the vendor? Sorry, has to be via Netflix. The DRM isn’t there to prevent copying, it’s to give Netflix control over the distribution path that ends at the TV.
Meanwhile, one person extracts a Blu-ray decryption key from some device or software player, dumps the high-definition video from a $20 disc, and then distributes it. You only need one Blu-ray key to get any movie released before that key was revoked (and if the key comes from a player that doesn’t have a secure update mechanism, studios may avoid revoking it to avoid complaints that their movies don’t play). Or, in some cases, people dump the cinema streams (modern cinemas are digital; they get movies distributed via satellite, encrypted with a per-studio key). This is a bit more risky because some studios have started watermarking the cinema streams so they can tell which cinema was responsible for a leak.
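Forensic watermarking at its simplest is just hiding a recoverable identifier imperceptibly in the signal. A toy least-significant-bit sketch (real cinema watermarks are vastly more robust against re-encoding, cropping, and camera capture; this only illustrates the principle):

```python
def embed(frames: bytearray, cinema_id: int, bits: int = 16) -> None:
    """Hide a per-cinema ID in the least significant bits of the first
    `bits` bytes of the decoded samples; the change is imperceptible."""
    for i in range(bits):
        frames[i] = (frames[i] & 0xFE) | ((cinema_id >> i) & 1)

def extract(frames: bytes, bits: int = 16) -> int:
    """Recover the hidden ID from a leaked copy."""
    return sum((frames[i] & 1) << i for i in range(bits))

frames = bytearray(range(64))   # stand-in for decoded video samples
embed(frames, 0xBEEF)
assert extract(frames) == 0xBEEF
```

The studio keeps a table mapping IDs to cinemas; the same ID recovered from a pirated copy points at the source of the leak.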
People probably could break Netflix DRM, but there’s little point when there are much easier paths. None of this matters because the point of the DRM isn’t to prevent piracy.
They do break it. Netflix originals that are only available on Netflix are available in 2160p from piracy websites. While looking, I only found one show not available in 2160p. Missing You, which was released yesterday, is currently only available in 1080p. Presumably the release groups that can crack the DRM used for 4k will upload that show shortly.
Tech companies convinced media companies that they needed DRM to ‘prevent piracy’. Apple was initially the most successful. They convinced three of the big four music labels to let them distribute their music as pure data files, with no associated physical medium, and promised that their DRM would prevent piracy.
With the caveat that I wasn’t in the room for the negotiations, that’s not how I remember things proceeding. The big music labels were apoplectic about Napster. Then, after they shut that down using the court system, about Gnutella, then about its offshoots (once they got Justin Frankel’s corporate overlords to quash Nullsoft’s Gnutella implementation).
Their music was being distributed as digital-only files, with no DRM, and they saw that as an existential threat, well before Apple got involved. Apple didn’t convince them they needed DRM; they demanded it. Hell, in 2001 and 2002, two years before the iTunes Music Store, Sony and BMG were putting DRM (which eventually evolved into the “DRM rootkit” scandal, later resolved by class-action judgements and recalls) onto retail audio CDs.
They told Apple they needed the DRM, and Apple had the good sense not to argue (much) with them.
In 2000-2003, even my least technically inclined friends knew how to download music from Napster while it was still available, how to use Aimster while that was still available, or how to use Limewire when everyone moved to that. It wasn’t a great experience, though. It was really only better than hauling yourself to a retail store, purchasing an entire CD to get the one song you wanted, and ripping it to get it onto your iPod or your Rio, recording it to a mix tape/CD, etc. Even then, you usually had to sit through four garbage, time-consuming downloads to get a good copy of one song. And if you ran Windows, you had a serious chance of picking up BonziBuddy and five similar turds that would eat your CPU cycles and show you ads.
What Apple offered was instant gratification at predictably decent quality, faster and easier than the bootleg networks, for $0.99/track or $7-$10 per album. The fact that the labels insisted on vendor lock-in via DRM was a bonus for Apple. The labels figured out that they’d swindled themselves once anyone with a compatible device (or a CD burner) made iTMS their first resort, and Microsoft started negotiating, then Amazon, etc.
Once the labels got smart and let Amazon sell un-DRM’d (but with identifying metadata in ID3 tags) MP3s, Apple got the same permission for iTunes tracks and even rolled out lossless encoding in much of the store.
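For the curious, tagging a purchased MP3 with identifying metadata is trivial. ID3v1 (the simplest tagging scheme; the stores actually used richer ID3v2 frames, but the idea is the same) is just a fixed 128-byte block at the end of the file. A sketch, with a hypothetical buyer ID in the comment field:

```python
def id3v1_tag(title, artist, album, year, comment, genre=255):
    """Build a 128-byte ID3v1 tag: "TAG" + 30-byte title, artist, and
    album fields + 4-byte year + 30-byte comment + 1-byte genre."""
    def field(s, n):  # fixed-width, NUL-padded field
        return s.encode("latin-1")[:n].ljust(n, b"\x00")
    return (b"TAG" + field(title, 30) + field(artist, 30)
            + field(album, 30) + field(year, 4)
            + field(comment, 30) + bytes([genre]))

# "buyer:123456" is a made-up identifier, purely for illustration.
tag = id3v1_tag("Some Song", "Some Artist", "Some Album",
                "2008", "buyer:123456")
assert len(tag) == 128 and tag[:3] == b"TAG"
```

Appending this block to an MP3 leaves the audio untouched but makes every copy traceable to the purchase, which is why “DRM-free” did not mean “anonymous”.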
I think we violently agree, though: convenient single-track purchases drastically reduced piracy. The only thing DRM had to do with it was that the labels required it before they’d try the experiment.
(These are just my memories of the space at the time. My only direct involvement with anything adjacent was convincing my company’s management that it was a bad idea to work on a FairPlay competitor at around that time.)
Steve Jobs was pretty anti-DRM, but he was in a minority. The rest of the industry was salivating at the chance of using someone else’s monopoly to gain one of their own.
Microsoft was pushing the Windows Media DRM scheme (and, because they’re Microsoft, a second, totally incompatible DRM scheme that the Zune used). The industry’s post-Napster fear was the marketing wedge, but this wasn’t about solving problems for customers; it was about controlling the distribution chain. If you adopted Windows DRM, users couldn’t play things on OS X or Linux, so Microsoft’s monopoly got another lever. Silverlight later played a part in this. I stopped paying the TV Licence in the UK (I hadn’t had a TV for a while so didn’t need to, but I wanted to support the BBC) when they launched iPlayer with Silverlight and it was a Windows-only thing.
The popularity of mobile platforms and the failure of Windows Mobile and the Zune disrupted Microsoft’s plans. As downloads were displaced by streaming, the underlying DRM scheme mattered less than the app that wrapped it. Control the app and you become the gatekeeper.
The movie industry half-learned the lessons from music: they started to build their own distribution channels (most of the new streaming platforms are operated by the studios), but they missed the key part. iTMS was popular because it was a single place to buy any popular recording, and the same was true of Amazon. Fragmenting the ecosystem into dozens of streaming providers makes each one individually less valuable.
This is my recollection as well. The copyright mafia salivated at the thought of pervasive DRM: each and every listen to a track would be part of a revenue stream! I think tech people tried to push back (at the time, you had to distribute the secret key for the media stream to devices for technical reasons, and they knew it was going to get cracked), but the dream of being able to extract even more money was too strong.
It’s worse than that: even new Blu-ray keys are compromised. They actually got the base key and factored it, so new keys are crackable too.
Or, in some cases, people dump the cinema streams (modern cinemas are digital, they get movies distributed via satellite, encrypted with a per-studio key)
Now most movies are first shared, in acceptable quality, by cracking the DRM inside the Netflix/Amazon/Disney etc. apps wide open. A small grey-market industry has even appeared around it.
That’s exactly what’s happening.
As other replies have indicated, the article only covers half of the DRM equation: the other half happens between the GPU and the screen, usually with HDCP. Several versions of HDCP have been released with increased security hardening, but the need to be backward-compatible with old TVs that don’t receive software upgrades is its Achilles’ heel.
HDCP 1.4 to 2.1 are now trivial to break, but content publishers cannot forbid these versions for 1080p streams without “breaking” a significant number of FHD TVs. 4K content usually requires HDCP 2.2+, which closes many of the existing exploits, but the fact that several scene groups release 4K rips indicates that it has been broken too.
The weakest point of the chain is currently HDCP, but if it were to become “unbreakable”, people would switch their attention to the other parts of the chain: hack Widevine or the GPU drivers, find vulnerabilities in a TV SoC that allow extracting the undecoded stream, and so on.
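For context on why old HDCP versions fell: HDCP 1.x key agreement is a Blom-style scheme. Each device has a public key selection vector (KSV, 20 of 40 bits set) and a vector of secret 56-bit keys derived from a symmetric master matrix held by the licensing authority; each side sums its secret keys at the positions of the peer’s KSV bits, and symmetry of the matrix makes the sums agree. A toy sketch (Python’s `random` standing in for real key material) of the agreement, which also shows the weakness: device keys are linear in the master matrix, so roughly forty leaked device key sets let you solve for the whole matrix, which is essentially how the HDCP 1.x master key was reconstructed:

```python
import random

MOD = 2 ** 56   # HDCP 1.x keys are 56-bit; addition is mod 2^56
N = 40          # dimension of the secret master matrix

random.seed(1)
# Symmetric secret master matrix, known only to the licensing authority.
S = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        S[i][j] = S[j][i] = random.randrange(MOD)

def make_device():
    """Issue a device: a public KSV plus derived private keys."""
    ksv = random.sample(range(N), 20)                       # 20 of 40 bits set
    keys = [sum(S[i][j] for j in ksv) % MOD for i in range(N)]
    return ksv, keys

def shared_key(my_keys, their_ksv):
    """Sum my private keys at the positions of the peer's KSV bits."""
    return sum(my_keys[i] for i in their_ksv) % MOD

ksv_a, keys_a = make_device()
ksv_b, keys_b = make_device()
# Both ends derive the same session key because S is symmetric.
assert shared_key(keys_a, ksv_b) == shared_key(keys_b, ksv_a)
```

HDCP 2.x dropped this construction for RSA-based authentication, which is why it required new attacks rather than just more leaked device keys.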
Widevine / Chrome CDM is already broken, L3 widely so.
I don’t know what pirates actually do, but the encryption for HDMI has been broken so they could capture the video stream as it leaves the GPU.
Some of this is hilarious. I have an analogue 5.1 surround-sound system. When I got an Xbox One, I needed a S/PDIF adaptor to decode DTS and provide the analogue output (everything prior to that had been able to generate the analogue output directly). Then the Series X removed S/PDIF, so I needed to extract the audio stream. I bought a small (and cheap) box that has HDMI in and out and a bunch of DIP switches that specify the formats the HDMI channel advertises that it supports (S/PDIF doesn’t have content negotiation: the sender picks a format and the receiver has to be able to handle it; HDMI lets the receiver specify the formats it supports and the sender pick the best one it can produce). To do this, the box has to decrypt the HDMI stream. There’s another DIP switch that specifies whether it then re-encrypts it. In one position, you have an end-to-end secure path for video; in the other, the video is plaintext between the box and the screen. Putting it in the re-encrypt mode consumes power and adds latency. There is no reason anyone would ever run it in that mode; it exists solely so that the manufacturer can claim the box isn’t intended for piracy (I’m not using it to pirate video streams, I just want to get audio out of my console, but it does provide digital audio and video streams, so you could easily connect them to a hardware compressor and get a movie file out).
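The negotiation asymmetry described above is easy to sketch: the sink advertises what it supports (via its EDID) and the source picks the best format both ends can handle, with stereo PCM as the mandatory fallback. A toy model (the format names and their ranking are made up for illustration) of that, and of what the DIP-switch box does by rewriting the advertised list:

```python
# Hypothetical audio formats, worst to best.
QUALITY = ["PCM_2CH", "DOLBY_DIGITAL", "DTS", "TRUEHD", "DTS_HD_MA"]
RANK = {fmt: i for i, fmt in enumerate(QUALITY)}

def negotiate(source_can_emit, sink_advertises):
    """HDMI-style negotiation: the source picks the best format that
    the sink advertises; stereo PCM is the always-supported fallback."""
    common = set(source_can_emit) & set(sink_advertises)
    if not common:
        return "PCM_2CH"
    return max(common, key=RANK.__getitem__)

# The DIP-switch box sits in the middle and replaces the TV's advertised
# list with whatever formats the analogue decoder can actually handle,
# so the console emits a bitstream the decoder understands.
console = ["PCM_2CH", "DOLBY_DIGITAL", "TRUEHD"]
print(negotiate(console, ["PCM_2CH", "DOLBY_DIGITAL"]))  # DOLBY_DIGITAL
print(negotiate(console, ["DTS"]))                       # PCM_2CH
```

S/PDIF has no equivalent of the advertised list, which is why the sender there just has to guess (or be configured) and hope the receiver copes.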
This is also possible, and is done, but it is not the preferred mechanism because it requires re-encoding the video, which reduces quality. Most files are available in the original encoding with the DRM stripped.
It doesn’t need to be perfect; it just needs to be hard enough that the layman can’t do it. In the olden days, average people shared VHS tapes of things they’d recorded. Now you have to be someone like us to download whatever movies you want.
Tangential, but funny where GPUs and DRM are concerned: my favourite(?) talk of 38c3 - https://app.media.ccc.de/v/38c3-the-master-key . I fondly recall the coffee-snort the day the keys just appeared on Pastebin, thinking it was an insider or supply-chain job.
The cryptographic engineering fail itself ranks up there with the PS3’s ‘randomly chosen by fair dice roll’ approach to key generation: the secure mode transition first checks the calculated MAC in the secure enclave, then leaves the correct value in non-secure, readable registers when returning. Swap registers, try again, and success!
This has an odd focus on preventing piracy of media; my concern is being able to run whatever OS and application software I want, with whatever patches I want. Additionally, as long as I correctly implement the network protocols, I would rather like to be able to interact with the rest of the world.
It’s not an odd focus, it’s just something you don’t care about. The motivating quote of the article is specifically about media streaming DRM.
The very first lines of the post are:
As part of their “Defective by Design” anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user’s control
So this post is a response to the FSF’s comment, and it was the FSF’s choice to put the focus there.
Luckily, the author has written about that topic at length as well, just not in this specific blog post addressing one other specific claim.
Pretty informative to a DRM dilettante such as myself.
x86 has no well-specified TEE (Intel’s SGX is an example, but is no longer implemented in consumer parts)
Interesting that Intel isn’t deploying these in consumer chips anymore, though I guess it makes sense, since long BIOS update timelines make the DRM use case very unattractive.
only Playready does hardware DRM on x86
How did high-definition content DRM ever work on Intel Macs if this is true?
SGX had very strong security guarantees. It used a Merkle tree over memory, with the root in cache, so you couldn’t do things like replay attacks even if you had malicious DRAM chips. This limited the amount of memory it could support (64 MiB originally; later versions almost doubled this).
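The integrity-tree idea can be sketched in a few lines: keep the root somewhere the attacker can’t touch, recompute it on access, and a replayed stale block no longer matches. A toy sketch (SHA-256 over byte-string “blocks”; the real hardware works per cache line with counters and MACs, so this is only the shape of the idea):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash each block, then hash pairs upward until one root remains."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

memory = [b"block-%d" % i for i in range(8)]
old_root = merkle_root(memory)          # root held on-die, out of DRAM

old_block = memory[3]
memory[3] = b"new-value"                # legitimate write...
new_root = merkle_root(memory)          # ...updates the cached root

memory[3] = old_block                   # malicious DRAM replays stale data
assert merkle_root(memory) == old_root  # recomputation matches the OLD root
assert merkle_root(memory) != new_root  # so the replay is detected
```

The point of holding the root in cache is exactly this last comparison: DRAM can lie about the blocks, but it can’t lie about the root the CPU never wrote out.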
That made it a lot of hardware complexity for some fairly niche use cases. The main non-DRM use case was storing hard disk encryption keys in the enclave, so you could do full-disk encryption with the CPU’s AES acceleration but not leak the keys to the kernel. For most threat models, this wasn’t much better than having the keys in a separate VM.
Then Spectre came along. It turned out to be incredibly hard to secure SGX against transient execution attacks. At the same time, the thing big customers (Google, Amazon, Microsoft) actually wanted was the ability to run customers’ VMs or containers with strong attestable guarantees that the cloud provider couldn’t see or tamper with the contents. We actually did build a thing for running Linux containers in SGX on Azure but it was not good and never made it to a shipping product for various reasons. The SGX programming model is very hard to secure against this kind of attack and so it proved to be a dead-end technology.
TDX is intended as the replacement and is built around VMs. You can create a VM that has shared (insecure) memory with a userspace process to emulate the SGX programming model, but no one really wants this. VM exits are fairly rare now, so doing some aggressive flushing on each transition hurts less than on SGX. TDX is not good, but it’s solving a problem that is at least adjacent to the ones customers want (Arm CCA is actually solving the right problem).
Alas, SGX is Swiss cheese. Keys have been extracted from it quite easily. I think that’s part of the reason they got rid of it: it’s so defective.
Isn’t that what HDCP is supposed to do?
No, that’s only about encrypting the stream on the display cable.
I wonder how difficult it would be to get access to whatever region of memory contains the decrypted video content? I suppose this would require hardware support and there probably aren’t good use cases that would make said hardware economically viable beyond piracy? Maybe you just read it off the HDMI port?