Mar 20, 2017

On Thursday, March 23rd, at 2pm Eastern, I will be hosting a webinar with Argenis Fernandez from Pure Storage where we will talk about infrastructure challenges as organizations start to adopt SQL Server’s in-memory features, such as In-Memory OLTP and Columnstore Indexes.

Details: The in-memory features found in modern versions of SQL Server, such as In-Memory OLTP and columnstore indexes, are some of the most ground-breaking and exciting enhancements to SQL Server in recent memory. However, have you explored these features and found that the performance boosts are not quite as great as advertised? The dependency on a blazing-fast infrastructure underneath SQL Server has never been higher. While these features are lightning fast when used appropriately, the speed of the underlying infrastructure, primarily CPU, memory, and storage, can hold back their performance. Join David Klee, Heraflux Technologies, and Argenis Fernandez, Pure Storage, to learn how to leverage these features to boost your database performance, how to detect and diagnose any infrastructure performance issues that might exist, and what long-term improvements to your infrastructure can safeguard your performance for years!

Register for this exciting webinar here! I look forward to seeing you all there!

May 12, 2016

I’m pleased to announce the general availability of a new free ebook collaboration with James Green from ActualTech Media called “Modern Storage Strategies for SQL Server”. Storage is vitally important to SQL Server performance, but the intricacies of each discipline are rarely known by the administrators on the other side. My goal for this ebook was to educate SQL Server professionals on how the storage underneath their data actually operates, to show them how to work with their storage administrators on topics specific to SQL Server, and to help them make the most of that storage to improve the performance and availability of their databases.

In This Gorilla Guide You’ll Learn:

  • The basics of SQL Server and database workload characteristics
  • Key considerations for storage architecture with regard to SQL Server
  • Useful tips for protecting SQL Server from disasters
  • Best practices for leveraging flash storage for SQL Server
  • How to modernize SQL Server by taking advantage of the latest updates to the platform

Download this free ebook today! Let me know what you think!

May 8, 2015

A few weeks ago, Bala Narasimhan from PernixData and I recorded a short conversation in which we discussed the tips and tricks that both DBAs and infrastructure administrators need to maximize the performance of their systems without significant re-architecting of their environments. Check it out!

May 5, 2015

My recent post on using the new DiskSpd utility to help you benchmark your storage is a great primer on how to use DiskSpd. But… what if you want to run multiple tests? For example, what if you want to run both read and write tests with a varying degree of threads or operations per thread to see the ramp-up curve? What if you wanted these automated? What about putting the results from all of these tests into something that you can quickly review?

Now you can – with a PowerShell script that I’m releasing for free. I call it DiskSpd Batch.

Not only does the script help automate your test cycles, it also leverages the great DiskSpd feature of saving results to an XML file. After the testing cycles complete, it extracts the relevant information from each test cycle and places it into a CSV output file. You can use this file to perform your own analysis of the results.
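For a sense of what that extraction involves, here is a minimal PowerShell sketch of reading values out of a DiskSpd XML result. The element names used here (Results, TimeSpan, Thread, Target, ReadCount, WriteCount) are illustrative assumptions only; inspect an actual result file for the exact schema.

# A sketch only - the element names below are assumptions; verify against a real DiskSpd XML file.
[xml]$result = Get-Content -Path 'C:\diskspd\result.xml'
$targets = $result.Results.TimeSpan.Thread.Target
$readIO  = ($targets | Measure-Object -Property ReadCount  -Sum).Sum
$writeIO = ($targets | Measure-Object -Property WriteCount -Sum).Sum
"Read I/Os: $readIO, Write I/Os: $writeIO"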

This PowerShell script is available for free over at my business web site at Heraflux.com.

Usage

First, download DiskSpd from TechNet, and extract it to your hard drive on the server that you wish to test. Read the documentation that comes with it.

Next, find the subdirectory that matches your system architecture (32 or 64-bit). This path becomes your location to the DiskSpd executable.

Download and copy the DiskSpd Batch script into a folder on your file system.

From an elevated PowerShell prompt, execute the script with the parameters described below.

Syntax

  • -Time: Duration for each test cycle, measured in seconds.
  • -DataFile: Path and file name for the workload file.
  • -DataFileSize: Workload file size, in the format “500M” for 500MB or “10G” for 10GB.
  • -OutPath: Results output file location (the output file is automatically named).
  • -SplitIO: “True” tests permutations of read and write mixes within the same test cycle, in increments of 10%. “False” tests only 100% read or 100% write cycles.
  • -AllowIdle: Pauses for 20 seconds between test cycles, so as not to overwhelm the storage device’s ability to flush inbound I/O to disk.

Example

A normal test cycle might resemble the following screenshot.
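For reference, an invocation might look something like the following. This is a hypothetical example: the script file name and the exact parameter value formats may differ slightly from your copy, so check the documentation that accompanies the download.

.\DiskSpd-Batch.ps1 -Time 60 -DataFile 'E:\diskspd\io.dat' -DataFileSize '10G' -OutPath 'C:\diskspd\results' -SplitIO 'False' -AllowIdle 'True'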

The script performs numerous tests in the testing cycle, and then extracts the relevant data from the resultant XML file and creates a CSV file with the information that matters.

The output file can be opened with your favorite spreadsheet program, or sliced directly in PowerShell, as shown after the list below. The columns that you will find the most interesting are:

  • WriteRatio
  • IsRandom
  • MB/s
  • IOps
  • Read MB/s
  • Read IOps
  • Write MB/s
  • Write IOps
  • Read and write latencies, broken out by percentile
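As a quick example of slicing that CSV without a spreadsheet, a few lines of PowerShell will do. This is a sketch: the input file name is hypothetical (the script names the output automatically), and the property names follow the column list above.

# Load the CSV and show the five highest-IOPs test cycles (file name is hypothetical).
$results = Import-Csv -Path 'C:\diskspd\results.csv'
$results |
    Sort-Object { [double]$_.IOps } -Descending |
    Select-Object -First 5 -Property WriteRatio, IsRandom, IOps, 'MB/s' |
    Format-Table -AutoSize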


Download and experiment with it! Remember, storage testing can be dangerous to an IT infrastructure. Not only can you overwhelm the server performing the testing, you can also negatively impact (or even bring offline) the entire storage device and all of the other dependent systems located on it. Do not execute any storage tests in your environment outside of your own workstation until you have express permission to execute the tests during a pre-specified window. Heraflux is not liable for any damage or disruption to your business from executing these tests in your environment.


Special thanks go to my friend Mike Fal ( b | t | l ) for helping me with the PoSH in this script. He’s an incredible PoSH resource!

If you have any feedback on these scripts, or have bugs or ideas for improvements, please don’t hesitate to contact me.

Apr 1, 2015

As I mentioned in my storage benchmarking post, storage performance is one of the critical infrastructure components underneath a mission-critical SQL Server.

My de facto storage benchmarking utility has recently changed. Last October, Microsoft released a great free utility called diskspd, and it is freely available at http://aka.ms/DiskSpd. I consider it a very solid, modern replacement for the much-loved SQLIO. It is a synthetic I/O subsystem workload generator that runs via a command line. It produces tests similar to SQLIO's, covering read or write, random or sequential access, the number of threads and thread intensity, and block sizes, but it also gives us some significant improvements.

The benefits of diskspd include:

  • Sub-millisecond granularity on all tests, extremely useful for local SSDs and flash/hybrid storage arrays
  • Ability to perform read AND write tests in the same test, similar to IOmeter
  • Latency output metrics per read and write portions of the test, with standard deviation values
  • CPU consumption analysis by thread during the tests
  • Latency percentile analysis with percentiles 0-99 and then 99.9 up to 99.99999999 and then 100%, which is very useful for finding inflection points at the extremes which can skew test averages
  • Can define the workload placement and size in the command line parameters, which is useful to keep the test cycles compact
  • Ability to set the entropy values used in the workload file generation
  • Output is in plain text with an option to output to XML, which is extremely useful for a result we can convert and use elsewhere

So, how do we use this utility? Simple. Download the executable file from TechNet (source code is available on GitHub for those who are interested) and extract the archive to your file system.

For this example, we’ll use c:\diskspd. Copy the diskspd.exe file from the platform folder of your choice into the c:\diskspd folder to keep the pathing simple.
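For example, the setup might look like this in PowerShell. The source path here is hypothetical, and the platform folder name (amd64 in this sketch) varies by DiskSpd release.

# Create the working folder and copy in the executable for your architecture.
New-Item -Path 'C:\diskspd' -ItemType Directory -Force | Out-Null
Copy-Item -Path 'C:\Downloads\DiskSpd\amd64\diskspd.exe' -Destination 'C:\diskspd\diskspd.exe'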

Some of the command line parameters are as follows.

  • -b: Block size of the I/O. Specify your unit of size (K/M/G). For example, 8KB block tests should use the flag -b8K.
  • -c: Creates workload file(s) of the specified size. Specify your unit of size (K/M/G). For example, a 50GB workload file can be created with the parameter -c50G.
  • -d: Test duration, in seconds. Tests of at least 30 seconds each are usually sufficient, and longer tests are suitable for production workload testing.
  • -h: Disables hardware and software caching from within the Windows OS layer, which mimics SQL Server behavior.
  • -L: Captures latency information. Vitally important for SQL Server performance testing.
  • -o: Outstanding I/Os (queue depth) per worker thread. Setting this higher increases the intensity of the test, which pushes your storage harder.
  • -r: Random or sequential. If -r is specified, random tests are performed. If this parameter is omitted, sequential tests are performed.
  • -t: Worker threads. I usually set this to the number of non-hyperthreaded cores on the server.
  • -w: Write percentage of the test. For a pure read test, set this to zero; for a pure write test, set it to 100. You can mix reads and writes. For example, to perform a 20% write / 80% read test, set the parameter as -w20.
  • -Z: Workload test write source buffers, sized to a specific number of bytes. Specify your unit of size (K/M/G). The larger the value, the more write entropy (randomness) your workload data file contains. Experiment with this value based on your system and database workload profiles. For example, 1GB source buffer sizes could use the flag -Z1G.

At the end of the line, specify the workload placement location and file name.

Other parameters exist for more advanced workload simulations, so read the great documentation that accompanies the executable.

What if you want to simulate SQL Server? For OLTP-type workloads, use the following sample command as a starting point.

diskspd -b8K -d2 -h -L -o4 -t4 -r -w20 -Z1G -c50G e:\diskspd\io.dat > resultssql.txt

This test executes an 80%/20% read/write test with an 8KB block size against a 50GB workload file located on the E: drive, using four worker threads with four outstanding I/Os each and a 1GB write-entropy source buffer, running for two seconds (lengthen the -d value for more meaningful results). It redirects the output into the resultssql.txt file for reference.

You can also save this into a batch or PowerShell script to make this test easily repeatable.
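As a sketch, the one-liner above wraps neatly into a PowerShell script. The paths match the example command; adjust them to your environment.

# Run the diskspd test and capture the plain-text results for later review.
$exe     = 'C:\diskspd\diskspd.exe'
$argList = '-b8K','-d2','-h','-L','-o4','-t4','-r','-w20','-Z1G','-c50G','e:\diskspd\io.dat'
& $exe @argList | Out-File -FilePath 'C:\diskspd\resultssql.txt'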

Execute this with Administrator privileges; otherwise, you might see an error about permissions to write the workload file, or the file creation might take considerably longer.
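If you are not sure whether your session is elevated, a quick check like this standard PowerShell pattern can save a failed run:

# Warn if the current PowerShell session is not running elevated.
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Warning 'Not elevated - diskspd may fail to write the workload file or take much longer.'
}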

[Screenshot: diskspd test execution]

The output presents some fantastic granular data. Let’s investigate the sections.

[Screenshot: diskspd output header]

The header simply shows the parameters that you used to run the individual test, including the command line itself.

[Screenshot: CPU consumption by thread]

The next section shows the CPU consumption, broken out by user and kernel time by worker thread, for the test cycle.

[Screenshot: IOPs and throughput by thread]

The next section gets more interesting. We now have the IOPs and throughput metrics, broken out by worker thread and by read and write, for the test. Read and write IOPs matter the most here, with throughput a close second. The operations per thread should be very similar across threads. Higher IOPs and throughput values are better.

[Screenshot: latency percentile analysis]

The last section is the most interesting. It presents a percentile analysis of the storage performance from the minimum value up to the maximum value. Look for points of inflection in the data. In this test, you can see a significant inflection point between the 99th and 99.9th percentiles. Statistically speaking, less than one percent of the I/Os in this test had a latency greater than 4.123 milliseconds, and less than one tenth of one percent had a latency greater than 11.928 milliseconds.

In conclusion, storage testing with diskspd has never been easier! This utility is now my de facto storage benchmarking tool. Give it a try!

Stay tuned – I’ve got a surprise coming!

Mar 23, 2015

Today I’d like to announce that I have been selected as a PernixPro for 2015! It’s a program similar to VMware’s vExpert, Microsoft’s MVP, and other community awards, recognizing those who help spread the word about PernixData. I’m very proud to be a part of this program!

For those new to PernixData, the FVP product is a great means to boost storage performance underneath virtual machines. It can leverage local host-based SSDs for I/O read and write caching, and it can leverage host memory for I/O caching as well. Redundancy capabilities include synchronous mirroring of cached data to other hosts, so that if a host fails, no data is lost. This platform is incredible (Gareth wrote about it too), and I have been using it to boost I/O performance underneath virtual SQL Servers for quite some time now. I look forward to sharing some research on its performance features once the home lab is back online!
