The Best Server CPUs Compared, Part 1
by Johan De Gelas on December 22, 2008 10:00 PM EST
Posted in: IT Computing
If you skipped to this page immediately, you can find our
"market analysis" on the previous page.
Looking at the server CPUs from the point of view of the market was surprising and refreshing. The whole problem with running every benchmark you can get your hands on is that it just gets confusing. Sure, we could add ten more benchmarks that fall under "other", but if either the Xeon or the Opteron wins them, would that give you a better view of the market? That is why we decided to focus on finally getting the Oracle and MCS benchmarks right, and why we rely on the more reliable industry-standard benchmarks to round out our analysis.
Right now, it is clear that the latest AMD Opteron is in the lead. We are at a pivotal moment in time. No matter how good the current Xeon "Harpertown" and "Dunnington" architectures are, they lose too many battles due to the platform they are running on: the FSB architecture is singing its swan song. Only a small part of the market, namely:
- The ERP people who don't care about power, but who need the highest performance at any cost
- The HPC people whose extremely compute-intensive code does not operate on sparse matrices, and is therefore not starved for memory bandwidth
- The people who render
…can ignore the shortcomings of the FSB-based platform.
For most other applications, the AMD platform is simply better in price/performance and performance/watt (see our previous Shanghai review). It won't last long though, as the performance the new Nehalem architecture has shown in OLTP, ERP, and OLAP is simply amazing. Moreover, there is little doubt that the dual Xeon 5570 with 34GB/s of memory bandwidth (a dual Opteron delivers 20-21GB/s) will shine in HPC too. AMD servers can counter with HyperTransport 3.0 and higher clock speeds, but that is for a later article…
A big thanks to Tijl Deneut for assisting me with the hundreds of benchmarks we ran in the past month.
29 Comments
zpdixon42 - Wednesday, December 24, 2008 - link
DDR2-1067: oh, you are right. I was thinking of Deneb.

Yes, performance/dollar depends on the application you are running, so what I am suggesting more precisely is that you compute some perf/$ metric for every benchmark you run. And even if the CPU price is not negligible compared to the rest of the server components, it is always interesting to look at both absolute performance and perf/$ rather than just absolute performance.
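zpdixon42's suggestion is straightforward to sketch. The snippet below computes a perf/$ metric for each benchmark; every score and price in it is a hypothetical placeholder, not a number from the review:

```python
# Hedged sketch of a per-benchmark performance-per-dollar metric.
# All scores and prices below are illustrative, not measured data.
cpu_prices = {"Xeon X5570": 1386, "Opteron 2384": 989}  # hypothetical USD

benchmark_scores = {
    "OLTP": {"Xeon X5570": 250, "Opteron 2384": 200},
    "OLAP": {"Xeon X5570": 180, "Opteron 2384": 160},
}

def perf_per_dollar(scores, prices):
    """Return score/price for every (benchmark, CPU) pair."""
    return {
        bench: {cpu: score / prices[cpu] for cpu, score in results.items()}
        for bench, results in scores.items()
    }

for bench, results in perf_per_dollar(benchmark_scores, cpu_prices).items():
    # Rank CPUs by perf/$ within each benchmark, best first.
    for cpu, ppd in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f"{bench}: {cpu} = {ppd:.3f} points/$")
```

As zpdixon42 notes, the ranking by perf/$ can differ from the ranking by absolute performance, which is exactly why both views are worth reporting.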
denka - Wednesday, December 24, 2008 - link
32-bit? 1.5GB SGA? This is really ridiculous. Your tests should be bottlenecked by I/O.

JohanAnandtech - Wednesday, December 24, 2008 - link
I forgot to mention that the database created is slightly larger than 1GB. And we wouldn't be able to get >80% CPU load if we were bottlenecked by I/O.

denka - Wednesday, December 24, 2008 - link
You are right, this is a smallish database. By the way, when you report CPU utilization, do you keep IOWait separate from CPU time used? If taken together (which was not clear), it is possible to get 100% CPU utilization of which 90% is IOWait :)

denka - Wednesday, December 24, 2008 - link
Not to be negative: excellent article, by the way.

mkruer - Tuesday, December 23, 2008 - link
If/when AMD releases Istanbul (K10.5, 6-core), Nehalem will again be relegated to second place for most HPC.

Exar3342 - Wednesday, December 24, 2008 - link
Yeah, by that time we will have 8-core Sandy Bridge 32nm chips from Intel...

Amiga500 - Tuesday, December 23, 2008 - link
I guess the key battleground will be Shanghai versus Nehalem in the virtualised server space... AMD need their optimisations to shine through.
It's entirely understandable that you could not conduct virtualisation tests on the Nehalem platform, but unfortunate from the point of view that it may decide whether Shanghai is a success or failure over its life as a whole. As always, time is the great enemy! :-)
JohanAnandtech - Tuesday, December 23, 2008 - link
"you could not conduct virtualisation tests on the Nehalem platform"

Yes. At the moment we have only 3GB of DDR3-1066, so that would make for pretty poor virtualization benchmarks indeed.
"unfortunate from the point of view that it may decide whether Shanghai is a success or failure"
Personally, I think this might still be one of Shanghai's strong points. Virtualization is about memory bandwidth, cache size, and TLBs. Shanghai can't beat Nehalem's bandwidth, but when it comes to TLB size it can make up some ground.
VooDooAddict - Tuesday, December 23, 2008 - link
With the VMware benchmark, it is really just a measure of the CPU/memory subsystem. Unless you are running applications with very small datasets where everything fits into RAM, the primary bottleneck I've run into is the storage system. I find it much better to focus your hardware funds on the storage system and use the company-standard hardware for the server platform.

This isn't to say the bench isn't useful. Just wanted to let people know not to base your VMware buildout solely on those numbers.
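Picking up denka's earlier point about keeping IOWait separate from real CPU load: on Linux the kernel exposes that split in the aggregate `cpu` line of /proc/stat. A minimal sketch, parsing a made-up counter line rather than a live reading (real monitoring would diff two snapshots taken over an interval):

```python
# Hedged sketch: report busy CPU time and iowait separately from a
# /proc/stat-style "cpu" line, so 90% iowait is not mistaken for
# 90% real CPU load. The counter values below are invented jiffies.
def parse_cpu_line(line):
    """Split the aggregate 'cpu' line of /proc/stat into named fields."""
    fields = line.split()
    names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    return dict(zip(names, map(int, fields[1:1 + len(names)])))

def utilization(sample):
    """Return (busy %, iowait %) for one snapshot of the counters."""
    total = sum(sample.values())
    busy = total - sample["idle"] - sample["iowait"]  # time actually computing
    return 100.0 * busy / total, 100.0 * sample["iowait"] / total

# Example with made-up counters: 1000 jiffies total, 100 spent in iowait.
snap = parse_cpu_line("cpu 500 0 200 200 100 0 0")
busy_pct, iowait_pct = utilization(snap)
print(f"busy {busy_pct:.0f}%  iowait {iowait_pct:.0f}%")  # busy 70%  iowait 10%
```

Tools like `iostat` and `vmstat` make the same distinction, which is why an I/O-bound Oracle run shows high %iowait but would not reach the >80% real CPU load mentioned above.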