Real-world virtualization benchmarking: the best server CPUs compared
by Johan De Gelas on May 21, 2009 3:00 AM EST, posted in IT Computing
Conclusions so Far
Both VMmark and vApus Mark I seem to give results that are almost black and white. They give you two opposite and interesting data points. When you are consolidating extremely high numbers of VMs on one physical server, the Xeon Nehalem annihilates, crushes, and walks over all other CPUs including its own older Xeon brothers… if it is running VMware ESX 4.0 (vSphere). A quick look at the VMmark results posted so far suggests you should just rip your old Xeon and Opteron servers out of the rack and start again with the brand-spanking-new Nehalem Xeon. I am exaggerating, but the contrast with our own virtualization benchmarking was quite astonishing.
vApus Mark I gives the opposite view: the Xeon Nehalem is without a doubt the fastest platform, but the latest quad-core Opteron is not far behind. If your applications are somewhat similar to the ones we used in vApus Mark I, pricing and power consumption may bring the Opteron Shanghai and even the Xeon 54xx back into the picture. However, we are well aware that the current vApus Mark I has its limitations. We have tested on ESX 3.5 Update 4, which is in fact the only hypervisor available from VMware right now. For future decisions, we admit that testing on ESX 4.0 is a lot more relevant, but that does not mean the numbers above are meaningless. Moving to a new virtualization platform is not something even experienced IT professionals do quickly: many scripts might not work properly anymore, the default virtual hardware is not compatible between hypervisor versions, and so on. For example, ESX 3.5 servers won't recognize the version 7 hardware of ESX 4 VMs. In a nutshell: if ESX 3.5 is your most important hypervisor platform, the Xeon 55xx, the Xeon 54xx, and the quad-core Opteron are all very viable platforms.
It is also interesting to see the enormous advances CPUs have made in the virtualization area:
- The latest Xeon 55xx of early 2009 is about 4.2 times faster than the best 3.7GHz dual-core Xeons of early 2006.
- The latest Opterons are 2.5 times better than the slightly higher clocked 3.0GHz dual-core Opterons of mid-2007, and based on this we calculate that they are about 3 times faster than their three-year-older brothers.
Moving from the 3-4 year old dual-core servers towards the newest quad-core Opterons/Xeons will improve the total performance of your server by about 3 to 4 times.
What about ESX 4.0? What about the hypervisors of Xen/Citrix and Microsoft? What will happen once we test with 8 or 12 VMs? The tests are running while I am writing this. We'll be back with more. Until then, we look forward to reading your constructive criticism and feedback.
I would like to thank Tijl Deneut for assisting me with the insane amount of testing and retesting; Dieter Vandroemme for the excellent programming work on vApus; and of course Liz Van Dijk and Jarred Walton for helping me with the final editing of this article.
66 Comments
has407 - Sunday, May 24, 2009 - link
Thanks very much for the additional data points, and especially for providing details. Still digesting your data (thanks again!), but a few thoughts...

1. At the risk of being pedantic... Both VMmark and vApus scores are dimensionless. It would be better to avoid terms such as "faster" to describe them; IMHO, that has led to distraction (or *cough* in some cases *cough* irrationality). Is a car that can move 5 people at 160KPH "faster" than a bus that can move 20 people at 80KPH? Maybe sticking to terms such as "throughput" or simply "performance" would be better.
2. While the geometric mean provides a nice single score, I hope you will continue to publish the detailed numbers that contribute to it (as done with VMark disclosures). The individual scores provide important clues as to whether a closer look is warranted, whether of the workload mix, the CPU, or the hypervisor.
For example, the sum of the workload (or arithmetic mean * 4) provides total overall throughput, which is an important indicator; in an ideal world that should match the geometric mean. A significant difference between those suggests a closer look is warranted.
E.g., unless you have a workload mix that can soak up extra CPU cycles, the Xeon 5080 and to a lesser extent the Opteron 2222 don't look like good choices. For the 5080, the CPU-intensive OLAP VM contributes 60% to the result, whereas the others tend to be ~40-45%, and the difference between the geometric and arithmetic mean for the 5080 is 19%, whereas for the rest it's <14%.
3. Note what happens if you pull the CPU-intensive OLAP VM out of the picture. While I can't empirically test that, and I'm using a bit of a sledgehammer here... Eliminate it from the scoring and see what happens: the difference between the geometric and arithmetic mean drops to ~1% across the board.
Moreover, the ratio of the scores with and without the OLAP VM is quite constant, with a correlation > 0.999. The outliers are again the Xeon 5080 and the Opteron 2222, and to a lesser extent the Xeon L5350, but not by all that much.
4. In short, I'm not sure what the addition of a CPU-intensive VM such as OLAP is adding to the picture, other than soaking up CPU cycles and some memory. A CPU-intensive VM is the easiest (or should be the easiest) for a hypervisor to handle, and appears to tell us little more than what idle time figures would tell us. In the case of the Xeon 5080 and Opteron 2222, it also appears to inflate their overall score (whether due to the processor or hypervisor, or more likely a combination of the two, is unclear).
5. That said, maybe it would be good to include a CPU-intensive VM in the mix, if for no other reason than to highlight those systems or hypervisors where that VM scores higher or lower than expected (e.g., the Xeon 5080 and Opteron 2222). However, I'd bet you can achieve the same result with a lot less work using a simpler synthetic CPU/memory-intensive test in the VM.
OTOH, maybe artificially driving CPU utilization towards 100% with such CPU-intensive VMs doesn't really tell us much more than we'd know without them--as IMHO my admittedly crude analysis suggests--and vApus might be a better indicator for those looking for clues as to appropriate workload allocation among virtualized systems, rather than those looking for a single magic number to quantify performance.
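(To make the geometric-versus-arithmetic-mean comparison in points 2-4 concrete, here is a minimal Python sketch of the calculation. The per-VM scores are made-up placeholders, merely shaped like the lopsided Xeon 5080 case described above; they are not numbers from the article.)

from statistics import geometric_mean, mean

# Hypothetical per-VM scores: one CPU-heavy OLAP VM dominating three weak ones.
scores = {"vm1": 0.10, "vm2": 0.11, "vm3": 0.10, "olap": 0.47}

def summarize(vals):
    gm, am = geometric_mean(vals), mean(vals)
    return gm, am, (am - gm) / am * 100  # GM-to-AM gap in percent

print("all four VMs:    GM=%.3f  AM=%.3f  gap=%.1f%%" % summarize(list(scores.values())))
print("without OLAP VM: GM=%.3f  AM=%.3f  gap=%.1f%%" %
      summarize([v for k, v in scores.items() if k != "olap"]))

A large gap between the two means signals that one VM dominates the composite score; dropping the dominant VM shrinks the gap to roughly zero, which is the pattern the comment describes.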
JohanAnandtech - Tuesday, May 26, 2009 - link
"However, I'd bet you can achieve the same result with a lot less work using a simpler synthetic CPU/memory-intensive test in the VM. "That would eliminate the network traffic. While the "native" running database is not making the OS kernel sweat, the hypervisor does get some work from the network, and thus this VM influences the scores of the other VMs. It is not a gigantic effect but it is there. And remember, we want to keep control of what happens in our VMs. Once you start running synthetic benches, you have no idea what kind of instructions are run. SQL server is closed source too, but at least we know that the instructions which will be send to the CPU will be the same as in the real world.
We will of course continue to publish all the different scores so that our inquisitive readers can make up their own minds :-). Nothing is worse than people who quickly gloss over the graphs and then start ranting ;-).
Thanks for the elaborate comment, although I am still not sure why you would remove the OLAP database. The fact that the four-core machines (Dempsey, dual-core Opteron) do not have a lot of cycles left for the other VMs illustrates what happens in an oversubscribed system where one VM demands a lot of CPU power.
has407 - Wednesday, May 27, 2009 - link
Johan -- My thought was not so much whether to get rid of the OLAP VM, but whether a simpler CPU-intensive VM would suffice, synthetic or otherwise. However, that's probably an academic question at this point, as you've already got it in the mix. (And a question I probably spent too much time thinking out loud about in my post. :)

The other, arguably more important, questions are whether including CPU-intensive VMs (OLAP or synthetic) in order to drive CPU utilization to 100% more easily--especially as it is 25% of the workload--provides significant additional information, and whether it is more representative than the VMmark approach.
That's a much harder question to answer, and far more difficult to model. Real-world benchmarks may be desirable and necessary, but they are not sufficient; a representative and real-world workload mix is also needed. What constitutes a "representative and real-world" mix is of course the Big Question.
I'll spare everyone more thinking out loud on that subject :), other than to say that benchmarks should help us understand how to characterize and model systems so we can more accurately predict performance. Without that we end up with lots of data (snapshots of workload X on hardware Y), but little in the way of a formal or rigorous understanding as to why. (This is one area where synthetic or micro-benchmarks can help provide insight, as much as they might be derided. And one reason IMHO why what passes for most benchmarking today contributes more noise than signal. But that's another subject.)
In any case, it's good to have vApus to provide additional data points and as a counterpoint to VMark. Thanks again. Looking forward to the next round of data.
has407 - Monday, May 25, 2009 - link
Sorry, fourth column in table labeled "B:GM" (duplicate of third column label) should be "B:AM".

has407 - Monday, May 25, 2009 - link
p.s. Here are the numbers on which that post was based, calculated using your raw data...

A - With OLAP VM
B - Without OLAP VM
GM - Geometric mean
AM - Arithmetic mean
- A:GM -- geometric mean of all four VM's * 4
- A:AM -- arithmetic mean of all four VM's * 4 (or the sum of the individual scores).
- B:GM -- geometric mean of the three VM's excluding the OLAP VM * 3.
- B:AM -- arithmetic mean of the three VM's excluding OLAP * 3 (or the sum of the individual scores excluding the OLAP VM).
A:GM A:AM B:GM B:AM A:GM/B:GM System
2.03 2.14 1.28 1.29 1.58 Dual Opteron 8389 2.9
2.45 2.54 1.60 1.60 1.54 Dual Xeon X5570 2.93
2.08 2.21 1.29 1.29 1.61 Dual Xeon X5570 2.93 HT off
1.87 1.99 1.16 1.17 1.61 Dual Xeon E5450 3.0
1.68 1.81 1.02 1.02 1.65 Dual Xeon X5365 3.0
1.12 1.22 0.68 0.68 1.66 Dual Xeon L5350 1.86
0.59 0.78 0.30 0.31 1.96 Dual Xeon 5080 3.73
0.82 0.96 0.45 0.46 1.80 Dual Opteron 2222 3.0
Correlation( A:GM, B:GM ): 0.9993
Hope that helps explain my conclusions.
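(For readers who want to reproduce the columns above from the article's per-VM result charts, a minimal Python sketch of the derivation could look as follows. The per-VM values here are placeholders rather than measured results, and statistics.correlation requires Python 3.10 or newer; has407's correlation figure was of course computed over all eight systems.)

from statistics import correlation, geometric_mean

# Placeholder per-VM scores (OLAP VM listed last) -- not the article's data.
per_vm = {
    "system A": [0.40, 0.42, 0.44, 0.80],
    "system B": [0.50, 0.52, 0.55, 0.88],
    "system C": [0.10, 0.11, 0.10, 0.47],
}

a_gm, b_gm = [], []
for name, vm_scores in per_vm.items():
    a = geometric_mean(vm_scores) * 4      # A:GM -- all four VMs
    b = geometric_mean(vm_scores[:3]) * 3  # B:GM -- OLAP VM excluded
    a_gm.append(a)
    b_gm.append(b)
    print(f"{name}: A:GM={a:.2f}  B:GM={b:.2f}  A:GM/B:GM={a / b:.2f}")

print("correlation(A:GM, B:GM):", round(correlation(a_gm, b_gm), 4))

The arithmetic-mean columns (A:AM, B:AM) follow the same pattern with statistics.mean in place of geometric_mean.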
solori - Friday, May 22, 2009 - link
I'm glad to see Johan's team has gone beyond the "closed" VMmark standard with a Windows-based benchmark and I hope this leads to more sanity-checking of results down the line. However, the first step is verifying the process before you get to the results. Here's an example of where you're leaving some issues dangling:

"However, the web portal (MCS eFMS) will give the hypervisor a lot of work if Hardware Assisted Paging (RVI, NPT, EPT) is not available. If EPT or RVI is available, the TLBs (Translation Lookaside Buffer) of the CPUs will be stressed quite a bit, and TLB misses will be costly."
This implies RVI is the default for 32-bit VMs. VMware's default for 32-bit virtual machines is BT (binary translation) and not RVI, even though VROOM! tests show a clear advantage for RVI over BT for most 32-bit workloads. While you effectively discuss the effects of disabling RVI in the 64-bit case, you're unclear about "forcing" RVI in the 32-bit case. Are you saying that AMD-V and RVI are enabled for the 32-bit workloads by default? VMware's guidance states otherwise:
"By default, ESX automatically runs 32bit VMs (Mail, File, and Standby) with BT, and runs 64bit VMS (Database, Web, and Java) with AMD-V + RVI."
- VROOM! Blog, http://blogs.vmware.com/performance/2009/03/perfor...
This guidance is echoed in the latest VI3.5 Performance Guide Release:
"RVI is supported beginning with ESX 3.5 Update 1. By default, on AMD processors that support it ESX Update 1 uses RVI for virtual machines running 64-bit guest operating systems and does not use RVI for virtual machines running 32-bit guest operating systems.
Although RVI is disabled by default for virtual machines running 32-bit guest operating systems, enabling it for certain 32-bit operating systems may achieve performance benefits similar to those achieved for 64-bit operating systems. These 32-bit operating systems include Windows 2003 SP2, Windows Vista, and Linux.
When RVI is enabled for a virtual machine we recommend you also, when possible, configure that virtual machine's guest operating system and applications to make use of large memory pages."
- Performance Best Practices and Benchmarking Guidelines, VMware, Inc. (page 18)
Your chart on page 9 further indicates "SVM + RVI" for 32-bit hosts, but there is no mention of steps you took to enable RVI. This process is best described by the Best Practices Guide:
"If desired, however, this can be changed using the VI Client by selecting the virtual machine to be configured, clicking Edit virtual machine settings, choosing the Options tab, selecting Virtualized MMU, and selecting the desired radio button. Force use of these features where available enables RVI, Forbid use of these features disables RVI, and Allow the host to determine automatically results in the default behavior described above."
- Performance Best Practices and Benchmarking Guidelines, VMware, Inc. (page 18)
So, which is it: 32-bit without RVI, or undocumented changes to the VMM settings in line with VMware's guidance? If it is the former, the conclusions are misleading (as stated); if the latter, such modifications should be stated explicitly since they do not represent the "typical" or "default" configuration for 32-bit guests. This oversight does not invalidate the results of the test by any means; it simply makes them more difficult to interpret.
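(For reference, and as an assumption to verify against VMware's documentation rather than something stated in the article or the quoted guides: if memory serves, the VI Client "Virtualized MMU" setting maps to per-VM .vmx entries along these lines.)

monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"

Here "hardware" would force RVI/NPT (and hardware AMD-V for the second option), "software" would forbid it, and "automatic" would restore the default behavior described in the quoted guidance.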
That said, a good effort! You may as well contrast 32-bit w & w/o RVI - those results might be interesting too. I know you guys probably worked VERY hard to get these results out, and I'd like to see more, despite what "tshen83" thinks :-)
Collin C. MacMillan -- http://solori.wordpress.com
JohanAnandtech - Sunday, May 24, 2009 - link
Hi Collin, I was under the impression that ESX now chooses RVI+SVM automatically, but that might have been ESX 4.0. I am going to check again on Monday, but I am 99.9% sure we have enabled RVI in most tests (unless indicated otherwise) as it is a best performance practice for the Opterons.
alpha754293 - Friday, May 22, 2009 - link
Another excellent, thorough, well-researched article. Thanks! :o)
JohanAnandtech - Sunday, May 24, 2009 - link
You are most welcome. Thx for letting us know!
knutjb - Monday, May 25, 2009 - link
Thanks for presenting another point of view. When I read the original article showing the new Xeons so far ahead, I was skeptical. Rarely does a company produce a product that is such a huge leap, not only over their competitors but over their own products too. When there is only one primary benchmark, the results can be skewed. Also, the wide variety of software combinations is eye-popping, so it is very time consuming to create a reasonable balance using real databases for a different but valid benchmark.

Thanks for the hard work; I look forward to reading more on this subject.