Two weeks ago I presented my session, “Squeezing Top Performance from Your Virtualized SQL Server,” at the SQL PASS Summit in Charlotte, North Carolina. One comment I made during the presentation sparked some debate: I normally enable the ‘Lock Pages in Memory’ feature of SQL Server as part of my default build template unless a good reason exists not to do so. This recommendation contradicts the opinions of some experts I greatly respect and admire. I have continued to receive emails asking me to weigh the pros and cons, so here I’ll present my reasons for enabling it by default, as well as the reasons why you might enable or ignore this setting in your environment.

First, let’s dig into this setting a little deeper. Then we will work through the reasons, specific to your environment and application, that might lead you to enable or ignore the setting.


Lock Pages in Memory (or LPIM) became a standard setting for SQL Server installations with the introduction of 64-bit SQL Server 2005. Occasionally, under memory pressure from within Windows Server, SQL Server would be forced to page out large quantities of memory. When this happened, SQL Server’s performance would drop sharply, usually with very negative effects on application performance. Windows would trigger this paging because of memory pressure, usually from background application activity or memory leaks in applications or drivers.

Enabling LPIM instructs SQL Server to allocate its buffer pool as locked pages that Windows cannot page out to disk, so SQL Server effectively ignores the operating system’s requests to give up memory.

SQL Server is designed to keep fetched data pages in memory for as long as possible to reduce disk I/O. If data that is normally read from memory must instead be read back from disk, tasks take much longer to complete while disk activity climbs. The community has been torn about when and why to enable this feature for quite some time, and there will never be a single “right” answer to this riddle.
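The size of that performance cliff is easy to see with back-of-the-envelope numbers. This is a minimal sketch; the latency figures and page count are illustrative order-of-magnitude assumptions, not measurements from any particular system:

```python
# Rough cost of re-reading pages from disk instead of the buffer cache.
# Latency figures are illustrative assumptions: ~100 ns from RAM versus
# ~5 ms for a random read on a spinning disk.
PAGE_READ_MEMORY_US = 0.1      # microseconds per page from memory
PAGE_READ_DISK_US = 5_000.0    # microseconds per page from disk

pages = 100_000                # 100k 8 KB pages, roughly 780 MB of data

memory_ms = pages * PAGE_READ_MEMORY_US / 1000
disk_ms = pages * PAGE_READ_DISK_US / 1000

print(f"From buffer cache: {memory_ms:.0f} ms")
print(f"From disk:         {disk_ms:.0f} ms ({disk_ms / memory_ms:.0f}x slower)")
```

Even with generous assumptions, a workload that was serving pages from memory and suddenly has to fetch them from disk slows down by several orders of magnitude, which is exactly what a paged-out buffer pool looks like to the application.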

However, understanding your environment and your system performance characteristics can help you determine the right setting for your application.

Now Introducing Virtualization

Now, let’s say your key SQL Servers are all virtualized under VMware vSphere. I leave Microsoft Hyper-V out here because Hyper-V does not let an administrator overcommit memory on an individual host. Dynamic Memory is different, and I will cover it in an upcoming post.

VMware vSphere allows administrators to overcommit a host’s memory resources. For example, an administrator can provision virtual machines with 100GB of cumulative allocated memory on a host that contains only 64GB of physical memory, and the VMs will all power on successfully.
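The arithmetic behind that example is simple. Here is a minimal sketch, where the individual VM sizes are hypothetical values chosen to sum to the 100GB figure above:

```python
# Illustrative memory overcommitment calculation for a single host.
# The per-VM allocations and host capacity are hypothetical examples.
vm_allocations_gb = [32, 24, 16, 16, 12]  # configured memory per VM
host_physical_gb = 64

total_allocated = sum(vm_allocations_gb)            # 100 GB provisioned
overcommit_ratio = total_allocated / host_physical_gb

print(f"Allocated: {total_allocated} GB on a {host_physical_gb} GB host")
print(f"Overcommitment ratio: {overcommit_ratio:.2f}x")
# All VMs still power on; trouble only starts if the VMs' *active*
# memory use approaches the host's physical 64 GB.
```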

However, if the virtual machines collectively approach that 64GB of memory actually in use on the host, the hypervisor detects the pressure and starts to engage features designed to keep the host available. One of the most disruptive of these is VM memory ballooning. Ballooning forces Windows to start swapping out some of its in-use memory. The feature is installed as a driver with the VMware Tools, which are standard on VMware-based virtual machines. Ballooning is triggered when the host reaches a level of memory overcommitment where it must reclaim memory to keep operating. Slow performance is better than downtime, right?

More information can be found on the way VMware handles memory resources in this PDF.

Now, I’ll separate theory from practice. In reality, most VMware environments were originally built to consolidate a physical server infrastructure footprint, and that mindset continues in today’s environments. I find that most SQL Servers are deployed into shared environments, where they compete for resources with non-SQL Server virtual machines. As a result, the design-for-consolidation methodology still applies to these shared environments.

VMware administrators generally do not understand the nuances of the applications that run inside the VMs they manage, and memory oversubscription is commonplace with just about every application. SQL Server is one of those applications whose memory requirements are generally not well understood unless you are a DBA. Put this all together, and you find that most virtualized SQL Servers land in environments where memory overcommitment, and therefore bouts of memory ballooning, are commonplace.

VMware gives the administrator the option to reserve the guest memory assigned to an individual virtual machine, so that if memory reclamation features kick in on a host, that virtual machine is not impacted. In vSphere 5.5, the setting is located under the Virtual Hardware settings for the virtual machine. Simply check the box labeled ‘Reserve all guest memory (All locked).’

[Screenshot: the ‘Reserve all guest memory (All locked)’ checkbox in the vSphere virtual machine settings]

I recommend that all of your production SQL Server virtual machines have this feature enabled if they are located on shared hosts. However, sometimes this decision is out of the hands of the DBA, and the virtualization administrator will not enable it on the DBA’s behalf. This is the single biggest reason why I push DBAs whose systems fit this profile to enable LPIM: it is the DBA’s last line of defense against performance problems caused by VMware memory ballooning.

How to Configure LPIM

To configure LPIM, assign the ‘Lock Pages in Memory’ user right in the local security policy to the service account that SQL Server runs under. Instructions for enabling this feature are located at this MSDN page. If you decide to enable LPIM, you must also configure the ‘max server memory’ setting for every instance on the server. Leave more than enough free memory for all other background tasks (antivirus scans, system backups, and so on) so that Windows can operate efficiently without running out of the free memory its core processes need. When in doubt, set max server memory low and monitor the free system memory performance counter to see how much you can raise it.
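As a starting point for that “set it low, then monitor” approach, the sizing arithmetic might be sketched like this. The headroom figures here are rule-of-thumb assumptions for illustration, not official guidance, and the function name is mine:

```python
# Hypothetical starting point for 'max server memory (MB)' on a
# dedicated SQL Server VM. The headroom defaults are assumptions;
# monitor the free-memory counter and adjust upward over time.
def suggested_max_server_memory_mb(vm_memory_gb: int,
                                   os_headroom_gb: int = 4,
                                   other_services_gb: int = 2) -> int:
    """Leave room for Windows plus background tasks (antivirus,
    backup agents, etc.); give the remainder to SQL Server."""
    sql_gb = vm_memory_gb - os_headroom_gb - other_services_gb
    if sql_gb <= 0:
        raise ValueError("VM too small for the chosen headroom")
    return sql_gb * 1024  # sp_configure takes the value in MB

print(suggested_max_server_memory_mb(32))   # 26624 MB for a 32 GB VM
```

The exact headroom you need depends on what else runs on the box; the point is that the value handed to SQL Server must be deliberately smaller than the VM’s allocation, never equal to it.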


Pros:

  • Last line of defense against performance problems caused by memory ballooning under host overcommitment
  • Guarantees SQL Server retains the necessary memory even if Windows or VMware requires additional memory


Cons:

  • Can cause system instability if Windows is caught without enough memory to perform core system functions (avoidable if max server memory is appropriately configured)
  • Potentially reduces (slightly) consolidation ratios at the physical host level


For virtualized SQL Server environments, I recommend a properly configured LPIM setup for production servers unless you have compelling reasons not to do so. For pre-production environments where performance is less of a concern, you may opt to skip LPIM in favor of greater consolidation ratios. I stand by my recommendations, but stress that you need to weigh the architecture and performance characteristics of your environment before deciding which option is best for you.