Tangential:
Why oh why does a web browser have a table of USB vendor IDs? I certainly haven’t dug into the details of what it’s used for, but on the face of it this is just another brick in the wall of why I hate browsers.
There’s a USB API for Chrome Apps
These are the apps for Chrome OS, basically. Not really any other way for Chrome OS to use its USB ports, I suppose.
Well user tracking obviously :)
This is a bit surprising, honestly, despite n-too-many years of C++. The size shrinkage he reports indicates I’m not the only one in this boat.
It’s easy to assume constants are nearly free to initialize and use, when in fact that isn’t guaranteed.
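A small sketch of the distinction, sticking with the article’s USB-vendor-ID theme (the names and the single table entry here are mine, purely for illustration): a `const` container of non-trivial types still runs constructors at startup and lands in writable memory, while a `constexpr` aggregate of plain data can be baked into the read-only segment.

```cpp
#include <array>
#include <map>
#include <string>

// Despite `const`, this map is built at process startup by a runtime
// constructor, so it lives in writable memory (and allocates on the heap);
// every process pays for its own copy.
const std::map<int, std::string> kVendorNames = {{0x18d1, "Google"}};

// A compile-time constant of plain data, by contrast, can be placed in the
// read-only .rodata segment and shared between processes by the OS.
struct Vendor { int id; const char* name; };
constexpr std::array<Vendor, 1> kVendors{{{0x18d1, "Google"}}};

// Proof it was evaluated at compile time, not at startup:
static_assert(kVendors[0].id == 0x18d1, "evaluated at compile time");
```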
On one hand, it’s nice when you get something for free, and the article itself is illuminating. And constantly shaving down a half a percent here, another percent there, is how things become fast.
On the other hand, I’m confused why this is ‘big’. I largely don’t care about on-disk size, and 200kb per process is not much compared to how much gets used rendering a web page.
It’s big if you consider CPU cache sizes. If this improves cache locality then it’s a huge win, regardless of whether you care about 200 kb of RAM or HDD usage. Making it a read-only segment might have the added benefit of other CPUs not needing to check whether the copy they hold in their cache has become dirty.
But most of the data mentioned was not likely to be in the hot path, and caches don’t care where the cache lines are physically located in RAM. If it were removing duplicated accesses of data from the hot path, that would be significant.
Also, that’s not how caches handle mutation. Because the OS can remap the data at any point, the cache needs to be able to handle the data being mutated at any point. However, caches also generally run on the assumption that the data will not be mutated until another CPU shoots down their cache entry.
The Wikipedia article is actually pretty decent: https://en.wikipedia.org/wiki/MESI_protocol
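The “shoots down their cache entry” behavior above can be sketched as a toy next-state function for one cache’s view of a line (the enum and event names are mine, a simplification of the protocol, not any real implementation):

```cpp
// States a cache line can be in under MESI.
enum class Mesi { Modified, Exclusive, Shared, Invalid };

// Events as seen by one cache: its own CPU's accesses, plus bus traffic
// generated by other CPUs touching the same line.
enum class Event { LocalRead, LocalWrite, RemoteRead, RemoteWrite };

// Next-state function for a single cache line. `othersHaveCopy` matters only
// when a LocalRead misses: the line loads as Exclusive if no one else holds it.
Mesi next(Mesi s, Event e, bool othersHaveCopy = false) {
    switch (e) {
        case Event::LocalRead:
            if (s == Mesi::Invalid)
                return othersHaveCopy ? Mesi::Shared : Mesi::Exclusive;
            return s;  // M/E/S satisfy reads without bus traffic
        case Event::LocalWrite:
            return Mesi::Modified;  // S/E/I must first invalidate other copies
        case Event::RemoteRead:
            // A dirty or exclusive line is demoted to Shared (M writes back).
            if (s == Mesi::Modified || s == Mesi::Exclusive) return Mesi::Shared;
            return s;
        case Event::RemoteWrite:
            return Mesi::Invalid;  // another CPU "shoots down" our copy
    }
    return s;
}
```

The point relevant to this thread: lines backed by read-only pages are never written, so they never enter Modified and never trigger the invalidation traffic.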
The win here isn’t necessarily in cache usage. The ‘big’ wins are in bug detection (the data can’t be overwritten since it’s in read-only pages, and an attempt to overwrite it is caught as a segfault) and faster runtime (processes can share read-write pages as long as they’re unchanged, but the first write to such a page, per process, takes a penalty as it’s copied; also, read-only pages don’t have to be written to swap; they can be dropped and reloaded directly from the executable). The lower RAM usage is nice too.
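The bug-detection half of that can be demonstrated directly with `mmap`/`mprotect` (a POSIX-only sketch; the function name and the `sigsetjmp` escape hatch are mine, just to make the fault observable instead of fatal):

```cpp
#include <setjmp.h>
#include <signal.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf gJump;

extern "C" void onSegv(int) { siglongjmp(gJump, 1); }

// Fill an anonymous page, then drop write permission -- roughly what the
// loader gives you for data the linker placed in a read-only segment.
// Returns true if a subsequent write was caught as a fault.
bool writeToReadOnlyPageFaults() {
    const long page = sysconf(_SC_PAGESIZE);
    void* mem = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return false;
    char* p = static_cast<char*>(mem);
    strcpy(p, "constant data");
    mprotect(p, page, PROT_READ);  // page is now read-only

    signal(SIGSEGV, onSegv);
    if (sigsetjmp(gJump, 1) == 0) {
        *(volatile char*)p = 'X';  // faults: delivered as SIGSEGV
        return false;              // not reached
    }
    munmap(mem, page);
    return true;  // the overwrite was caught, not silently applied
}
```

A stray write through a dangling pointer into such a page crashes loudly at the faulting instruction instead of silently corrupting a “constant”.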