Today I am starting a new series of short posts designed to help SQL Server administrators with virtualized SQL Servers twist the mind a bit and think about virtualization in a different light. I call it “Smart Moves with SQL Server VMs”.
For non-VM admins, virtualization usually amounts to P2V’ing a system and then forgetting about it. Once virtual, though, the world can change (for the better, I promise) in ways not usually considered, and you – the data manager – can benefit dramatically from embracing the new technology.
The first post in this series discusses a trick that uses virtual machine proximity to speed up large volumes of data movement, such as nightly ETL processes, application server data handling, or database backups.
First, we’ll be using a free utility called iperf to demonstrate the raw performance differences here. You can read more about how to use it at this blog post.
So the scenario is as follows. You have two servers: a SQL Server VM that runs a nightly ETL process, and an application server VM from which it pulls large quantities of data. The traffic travels over the physical network.
A quick iperf test can show the total possible network bandwidth between the two servers.
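For example, a typical invocation looks like the following (the hostname is a placeholder, and iperf 2.x uses the same `-s`/`-c` switches as iperf3):

```powershell
# On the application server VM, start iperf in server (listener) mode:
iperf3 -s

# On the SQL Server VM, run the client against the app server's name or IP
# ("appserver01" is a placeholder) for a 10-second throughput test:
iperf3 -c appserver01 -t 10
```

The client-side summary reports the achieved bandwidth between the two machines.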
This test shows that we’ve got just about the maximum possible bandwidth between the two servers on this 1Gb network. At this point, the data that you will transfer is most likely limited by this bandwidth bottleneck.
Now, what if you were to place both of these VMs on the same physical machine? If the networking is set up properly, the situation can change. You can rerun the same iperf test and see the following results.
We get a better than 15x performance boost in this scenario.
Your network traffic now passes through the backplane of the physical machine without ever touching the physical network. All of this is transparent to SQL Server, the operating system, and the application code. There are no code changes and no funny tricks with Windows Server networking… just a very dramatic performance boost.
Think about this performance difference for a moment. With the networking stack out of the picture, your large data movement bottleneck could now be the storage speed reading from disk, or it could be the CPU scalability of the ETL process itself. Both of these bottlenecks have much higher thresholds than the networking stack’s performance.
Your large-volume data transfer process is virtually certain to see a performance improvement, cutting both the run time of the process and the load it places on the physical network.
The best part is that both VMware vSphere and Microsoft Hyper-V have PowerShell interfaces into the hypervisor, and scripting a command to co-locate these two VMs on the same host is a relatively trivial task. All you need is the appropriate permissions from the virtualization admins. It can even be programmed to execute as a prerequisite step in the ETL processes or jobs that perform these transfers.
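On the vSphere side, such a prerequisite step could be sketched with PowerCLI roughly as follows (the vCenter, VM, and host names here are placeholders; exact parameters can vary by PowerCLI version):

```powershell
# Connect to vCenter (prompts for credentials; "vcenter01" is a placeholder).
Connect-VIServer -Server vcenter01

# Look up both VMs and note which host the SQL Server VM is running on.
$sqlVm  = Get-VM -Name "SQL01"
$appVm  = Get-VM -Name "APP01"
$target = $sqlVm.VMHost

# If the app server VM lives on a different host, vMotion it over so the
# ETL traffic stays inside one physical machine.
if ($appVm.VMHost.Name -ne $target.Name) {
    Move-VM -VM $appVm -Destination $target
}
```

Keep in mind that DRS (if enabled) may move the VMs apart again later, so a VM-to-VM affinity rule is worth considering for a permanent arrangement.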
Go forth and improve performance! More tips and tricks like this are coming in the following weeks!
I am trying to implement this idea.
We use VMware vSphere 5.5 and conducted a test on two VMs.
Regarding the statement “All of it is transparent to the SQL Server, operating system, or application code”: I interpreted that as “no changes required to any network setting”. Is that correct?
All our infrastructure has MTU set to 1500.
Here are my iPerf results so far:
— 1.8 Gbits/s: two VMs on different hosts.
— 1.8 Gbits/s: two VMs on the same host, no network settings changed.
— 5.7 Gbits/s: two VMs on the same host, with Jumbo frames set to 9000 in both VMs.
– I thought that no network configuration changes would be required.
– On the plus side, this seems to prove that the network traffic between these two VMs is more direct now (we did not set Jumbo frames anywhere else outside the two test VMs)
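For anyone reproducing this, the in-guest MTU can be changed along these lines in a Windows guest (the adapter name "Ethernet0" is a placeholder, and the NIC driver's own Jumbo Packet setting may need changing as well):

```powershell
# Raise the IP MTU on the in-guest network interface to jumbo frame size:
netsh interface ipv4 set subinterface "Ethernet0" mtu=9000 store=persistent

# Verify the current MTU values:
netsh interface ipv4 show subinterfaces
```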
Nevertheless, even with Jumbo frames, I was not able to get transfer rates as high as yours.
Do you have any advice to improve networking performance here?
I tested the in-memory transfer performance inside these VMs using Geekbench: the reported memory bandwidth is between 3.8 and 5.3 GB/sec. Our VMware hosts use AMD CPUs.
Could this explain why my iPerf results are lower than yours?
Is there anything that we could do to improve the memory transfer performance inside our VMs?
Lots of questions 🙂
Thank you in advance!
Great question, and thanks for posting! It could be a number of things. Are these on blade servers instead of rack-mount servers? I’ve seen blade servers push all traffic out to the network, even when the VMs are on the same virtual network port group (have you checked this too?). The AMD CPUs could have something to do with it, but I doubt they are a huge contributor.
No in-guest changes are needed to get the throughput numbers that you saw, but I’ve seen performance numbers all over the place on various platforms. Is it 1GbE networking or 10GbE+? Are the VMs on the same port group? What else is sharing the port group’s traffic? Is anything in between the two VMs limiting overall throughput (i.e., blade vNIC presentation)?