Not to hijack the discussion, but I have always found Linux more conscientious about focusing on the core product than Windows. Perhaps with the stack ranking system gone, people will take appropriate risks to clean house instead of just piling on new features, and stop worrying about politics?
One reason this may manifest is that Linus really sees Linux as just Linux–the kernel & userspace APIs. Things like databases and build systems are things that run on Linux, but they’re not part of it.
Compare this approach to Windows, where IE is a part of it, and so is the GUI, and so are many other components. As a result, they’re also focusing on their “core product”, but that product has a lot more surface area.
Exactly. FreeBSD is another good example: FreeBSD is the whole system, with the kernel being just one part.
This is the reason I’m excited about new innovations in CPU architecture (e.g. the Mill CPU). So many of our three-point-something billion instructions per second are being wasted waiting on memory. It’s just absurd.
I suspect that’s mostly because CPUs are marketed on performance, while RAM is marketed on capacity. There’s no inherent reason memory needs to be so much slower than the CPU, but when all the market pressure is for it to be bigger and cheaper rather than faster, it’s obviously not going to catch up.
There’s a lot more in play than just “market pressures”. Take, for example, the L1 cache in a recent CPU. This is an on-chip memory (SRAM) much smaller than main memory (say 32K), and with a vastly higher cost-per-unit-size. And it’s still “slower” than the CPU, in that it’ll typically take multiple cycles to access. The biggest constraints aren’t economic, they’re physical – electrical signals propagate through wires at finite speeds, and as your memory gets bigger (even at just a few KB of SRAM) this starts to be a non-negligible factor. So if you wanted a memory that was “as fast as your CPU”, you could build it, but it’d be so tiny as to be completely unusable. Imagine your register file being all the memory you had.
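To put rough numbers on that gap, here’s a quick C sketch of my own (buffer sizes, hop counts, the use of rand(), and the file name are all arbitrary illustration choices, not anything canonical): it chases a randomly shuffled pointer chain through a buffer that fits in L1, then through one far larger than any cache, and reports the average time per dependent load.

```c
/* Latency sketch: chase a randomly shuffled pointer chain through a buffer
 * that fits in L1, then through one far bigger than any cache. Each load
 * depends on the previous one, so the loop runs at memory latency, not
 * bandwidth. Sizes and hop counts are arbitrary illustration values. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_load(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof *next);
    if (!next) exit(1);

    /* Build a random single-cycle permutation (Sattolo's algorithm) so the
     * hardware prefetcher can't predict the next address. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < hops; i++)
        p = next[p];                      /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = p;             /* keep the loop from being elided */
    (void)sink;
    free(next);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)hops;             /* average ns per dependent load */
}

int main(void)
{
    srand(1);
    printf("16 KiB (fits in L1):  %.1f ns per load\n",
           ns_per_load(16u << 10, 1u << 26));
    printf("256 MiB (DRAM-bound): %.1f ns per load\n",
           ns_per_load(256u << 20, 1u << 25));
    return 0;
}
```

Compile it with something like `cc -O2 chase.c`; on a typical desktop the L1-resident chase comes in around a nanosecond per load, while the DRAM-bound one is closer to a hundred times that. That ratio is the “absurd” waste being complained about above: every one of those DRAM hops is a hundred-odd cycles in which the core has nothing useful to do but wait.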
Notably, the PlayStation 4 has a unified memory system with 8 GB of GDDR5, rated at 176 GB/s. Maybe it’ll persuade other manufacturers to adopt similar architectures?
But for many (most?) workloads, the problematic aspect of memory performance isn’t bandwidth, it’s latency, and the two are often at odds with each other.
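A small variation on the latency sketch above (again just my own illustration; the 256 MiB buffer size is an arbitrary choice) shows why a headline bandwidth figure like 176 GB/s doesn’t tell the whole story: stream through a big buffer sequentially and the prefetcher keeps the memory bus busy, but walk the same buffer as a chain of dependent loads and you pay the full latency on every access, so the effective throughput collapses.

```c
/* Bandwidth vs. latency sketch: read the same 256 MiB buffer two ways.
 * Sequential streaming issues many independent loads the prefetcher can
 * overlap, so throughput approaches DRAM bandwidth. A dependent pointer
 * chase can't overlap anything, so the same memory delivers only a small
 * fraction of that. Buffer size is an arbitrary illustration value. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)32 << 20)            /* 32M size_t elements = 256 MiB */

static double now(void)
{
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec * 1e-9;
}

int main(void)
{
    size_t *a = malloc(N * sizeof *a);
    if (!a) return 1;

    /* Random single-cycle permutation (Sattolo) so the chase can't be prefetched. */
    for (size_t i = 0; i < N; i++) a[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }

    /* 1. Sequential read: independent accesses, bandwidth-bound. */
    double t = now();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += a[i];
    double stream_gbs = (double)(N * sizeof *a) / (now() - t) / 1e9;

    /* 2. Dependent pointer chase over the same data: latency-bound. */
    t = now();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = a[p];
    double chase_gbs = (double)(N * sizeof *a) / (now() - t) / 1e9;

    printf("sequential stream: %6.1f GB/s (sum=%zu)\n", stream_gbs, sum);
    printf("dependent chase:   %6.1f GB/s (end=%zu)\n", chase_gbs, p);
    free(a);
    return 0;
}
```

The two loops touch exactly the same amount of memory; the only difference is whether the next address is known before the current load completes. That’s the sense in which bandwidth and latency are at odds: the tricks that buy you more of one (wider, more deeply pipelined, more banked memory like GDDR5) tend not to help, and can even hurt, the other.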