Why I love Phoronix: despite running the benchmarks on the same machine, the reported CPU is different for OS X and Linux.
“We need a table with data. This is data. Put it in the table!”
I wonder if the reported CPU differences come from OS X using CPU scaling and the others not. I think the 2.6 GHz is the non-turbo base speed of the CPU, and the 3.1 GHz is the highest turbo speed.
It would be extra silly (for the test results) if the OS X system were the only one using CPU scaling properly.
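A quick sketch of why tools can disagree here: on Linux, cpufreq exposes frequencies in kHz under /sys/devices/system/cpu/cpu0/cpufreq/ (e.g. scaling_min_freq, cpuinfo_max_freq), and which file a tool reads determines whether it reports the base clock or the top turbo bin. The readings below are hypothetical values matching the numbers in this thread, not anything pulled from the tested machine:

```python
# Sketch: the same CPU reported at different speeds depending on
# which cpufreq value (in kHz) a benchmarking tool happens to read.

def khz_to_ghz(khz: int) -> float:
    """Convert a cpufreq kHz reading to GHz."""
    return khz / 1_000_000

# Hypothetical readings matching the thread's numbers:
base_khz = 2_600_000   # e.g. cpuinfo base clock -> 2.6 GHz
turbo_khz = 3_100_000  # e.g. cpuinfo_max_freq   -> 3.1 GHz

print(f"base: {khz_to_ghz(base_khz)} GHz, turbo: {khz_to_ghz(turbo_khz)} GHz")
```

If one OS reports the former and another the latter, an automated results table will show "different" CPUs on identical hardware.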
Yeah, I almost never trust Phoronix, their stuff is always so dubious.
I do believe OS X is slower than Linux in benchmarks, Apple isn’t optimizing OS X as a server operating system. There are no surprises there.
Phoronix has gone out of their way to make all their benchmarking easily reproducible. You should check out some of their work:
All of the hardware was the same throughout testing; the reported differences in the automated table above just come down to differences in what each OS reports, such as the CPU base frequency versus its turbo frequency.
Exactly. So what the hell is the point of the table?
Sorry, I misunderstood your complaint. :)
I’m continually amazed at how far behind Apple is on disk subsystem performance.
I chuckle when I remember this being part of their advertising: https://web.archive.org/web/20060202051022/http://www.apple.com/macosx/features/unix/
HFS/HFS+ has an unusual filesystem design, which can impact performance characteristics (I'm not sure if that's the main factor here, but it often is). It handles metadata very differently from Unix filesystems: rather than inode tables, it has a centralized B-tree-structured catalog file. This is nice for some things, but it results in a global (filesystem-wide) write lock for metadata updates.
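A toy illustration (not Apple's code, all names invented) of why one shared catalog implies a filesystem-wide lock: when every file's metadata lives in a single structure, updates to *different* files still serialize on the same lock, whereas per-inode metadata only contends on same-file updates.

```python
# Toy sketch contrasting a single-catalog design (HFS+-style) with
# per-inode metadata (Unix-style). Dicts stand in for on-disk trees.
import threading

class CatalogFS:
    """All metadata in one catalog file -> one global write lock."""
    def __init__(self):
        self.catalog = {}                   # stand-in for the B-tree catalog
        self.catalog_lock = threading.Lock()

    def touch(self, path):
        with self.catalog_lock:             # every metadata update contends here
            rec = self.catalog.setdefault(path, {"mtime_bumps": 0})
            rec["mtime_bumps"] += 1

class InodeFS:
    """Per-inode metadata -> independent per-file locks."""
    def __init__(self):
        self.inodes = {}

    def create(self, path):
        self.inodes[path] = {"mtime_bumps": 0, "lock": threading.Lock()}

    def touch(self, path):
        inode = self.inodes[path]
        with inode["lock"]:                 # only same-file updates contend
            inode["mtime_bumps"] += 1
```

With many threads touching many different files (a compile, an untar), every CatalogFS.touch queues behind one lock, while InodeFS updates proceed in parallel.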
Design-wise, HFS+ is closer to FAT32 than to other modern filesystems. NTFS, while old, is pretty solid. HFS+ carries a bunch of cruft inherited from HFS, like resource forks and creator types, with ACLs grafted on.
I think it’s at the behest of Adobe, whose software isn’t prepared for backwards-compatibility changes like case sensitivity.
They could change a lot about the filesystem while keeping the interface compatible. But they have no particular incentive to do so. The SSDs in the new MacBooks are stupid fast, and that’s good enough.
It’s hard for me to believe some of those benchmarks were measuring quite the same thing. 100 TPS from a spinning disk sounds about right. 5000 is not possible without batching, in which case they aren’t really “transactions”, are they?
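A back-of-envelope check on those numbers: a durable transaction needs at least one synchronous write, and in the worst case a spinning disk can only commit about once per revolution, so rotation rate caps un-batched TPS.

```python
# Rough ceiling on synchronous commits per second for a spinning disk:
# one fsync'd commit per revolution in the worst case.

def rotations_per_second(rpm: int) -> float:
    return rpm / 60

print(rotations_per_second(7200))   # 120.0 -> ~100 TPS is plausible
print(rotations_per_second(15000))  # 250.0 -> even fast SCSI disks fall
                                    # far short of 5000 without group commit
```

So 5000 TPS on rotating media only makes sense if many transactions share each physical write, i.e. batching/group commit.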
Heh. Found some old postmark results. http://www.shub-internet.org/brad/FreeBSD/postmark.html
A PowerBook G4 running 10.2 cranks out several times more TPS than the Mac mini tested here. At least for one set of test parameters.
I use both Linux and OS X every day, and my real-world experience is that they’re about the same on similar machines. Graphics on OS X feel faster and are definitely easier to configure, but that’s subjective.
In any case, the results of this benchmark seem suspect to me. I’d believe OSX is 10% behind Linux on disk performance while running a build, but I find it really hard to believe it takes 3-5 times longer. It just doesn’t match my experience.
I’d like to see somebody else confirm the results, because it’s a little hard to believe coming from a Linux fanboy site like Phoronix.