Apr 17 2015

I’m proud to announce that my company, Heraflux Technologies, is looking to fill the position of Vice President of Business Development. When we first started our company in September 2013, we set a number of short- and long-term goals for how we wanted to grow and scale the business. We wanted to serve our clients as the strategic adviser on the convergence of infrastructure and data for a variety of organizations. As we have expanded over the last year and a half and tackled numerous projects for some great clients, we want to make sure we continue to deliver the highest standards for our current and future clients by appropriately balancing current projects with inbound requests.

We are ready to take the next step in evolving our business and find the right person who can help us continue to scale. On our employment page, we have posted an opening for the position of Vice President of Business Development. We are seeking candidates with strong sales and business development experience around the convergence of infrastructure and data. Ideally, the candidate has strong experience in the Microsoft SQL Server, Hyper-V, VMware vSphere, and storage industries, with both platform and product knowledge and experience managing professional services in these silos. Strategic leadership experience in understanding the trends in these industries and adapting premier professional services offerings to maintain a strong niche in technology leadership is a must.

If you think you are qualified to help a strategic consulting group grow, or know someone who might be interested, apply here!

Apr 15 2015

Recently, my friend Mike Fal ( b | t ) released a PowerShell script that samples the counter inside SQL Server that HammerDB, one of my favorite database benchmarking utilities, uses to produce the transactions per minute value it outputs.

[Image: hammerdb1]

I thought I’d counter with a way to perform this task directly within SQL Server and store the results inside a database, so that a mathematical analysis can be performed quickly and repeatably. I always like to have multiple methods to perform any given action, so that you can select the best tool for a given scenario.

As Mike notes, the best counter to collect for this value is "Batch Requests/sec", and it is very straightforward to capture. To automate the collection, follow these easy steps.

Create a container database, or opt to use one of your utility databases.

Create a table to store the results. Use or adapt the following DDL for your environment.

CREATE TABLE [dbo].[BatchReqPerSecLogger](
    [ID] [bigint] IDENTITY(1,1) NOT NULL,
    [BatchReqPerSec] [bigint] NULL,
    [DT] [datetime] NULL CONSTRAINT [DF_BatchReqPerSecLogger_DT] DEFAULT (GETDATE()),
    CONSTRAINT [PK_BatchReqPerSecLogger] PRIMARY KEY CLUSTERED ([ID] ASC)
);

Next, create a SQL Server Agent job that executes the following command against this new database every ten seconds.

INSERT INTO dbo.BatchReqPerSecLogger (BatchReqPerSec)
SELECT
    cntr_value
FROM
    sys.dm_os_performance_counters
WHERE
    counter_name = 'Batch Requests/sec';
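
If you prefer to script the job rather than build it in the SQL Server Agent GUI, here is a minimal sketch of the idea. The job name, schedule name, and the UtilityDB container database are placeholder names of mine; adjust them to your environment.

USE msdb;
GO
-- Placeholder job name; created disabled, per the next step.
EXEC dbo.sp_add_job @job_name = N'Log Batch Requests/sec', @enabled = 0;
EXEC dbo.sp_add_jobstep
    @job_name = N'Log Batch Requests/sec',
    @step_name = N'Sample counter',
    @subsystem = N'TSQL',
    @database_name = N'UtilityDB',   -- your container/utility database
    @command = N'INSERT INTO dbo.BatchReqPerSecLogger (BatchReqPerSec)
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = ''Batch Requests/sec'';';
EXEC dbo.sp_add_jobschedule
    @job_name = N'Log Batch Requests/sec',
    @name = N'Every 10 seconds',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 2,       -- subday interval unit: seconds
    @freq_subday_interval = 10;
EXEC dbo.sp_add_jobserver @job_name = N'Log Batch Requests/sec';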

Disable the job until you need it, and enable it when you begin your tests.

You can create a view on this data that gives you insight into the database performance trend with a simple query. The view leverages the LAG window function, available in SQL Server 2012 and later. Because the raw counter value is cumulative, the delta between consecutive samples divided by the sample interval yields batch requests per second, and multiplying by 60 estimates batch requests per minute for each sample period.

CREATE VIEW dbo.vBatchReqPerSec AS
SELECT
    ID, BatchReqPerSec, DT,
    LAG(BatchReqPerSec) OVER (ORDER BY DT) AS PrevSample,
    LAG(DT) OVER (ORDER BY DT) AS PrevSampleDT,
    DATEDIFF(ms, LAG(DT) OVER (ORDER BY DT), DT) / 1000. AS SecDiff,
    (BatchReqPerSec - LAG(BatchReqPerSec) OVER (ORDER BY DT))
        / (DATEDIFF(ms, LAG(DT) OVER (ORDER BY DT), DT) / 1000.) AS TPS,
    ((BatchReqPerSec - LAG(BatchReqPerSec) OVER (ORDER BY DT))
        / (DATEDIFF(ms, LAG(DT) OVER (ORDER BY DT), DT) / 1000.)) * 60 AS TPM
FROM
    dbo.BatchReqPerSecLogger;

You can now query against this data based on the date and time ranges for your test, and perform whatever analysis you like!

SELECT
    *
FROM
    dbo.vBatchReqPerSec
WHERE
    DT >= CAST('2015-03-03 07:00' AS DATETIME)
    AND DT < CAST('2015-03-03 08:00' AS DATETIME)  -- adjust the range to your test window
ORDER BY
    DT;

[Image: hammerdb2]
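
For example, a quick aggregate over the same window (a hypothetical one-hour range of mine) summarizes a run; note that the first sample's TPM is NULL and is ignored by the aggregates.

SELECT
    AVG(TPM) AS AvgTPM,
    MIN(TPM) AS MinTPM,
    MAX(TPM) AS PeakTPM
FROM
    dbo.vBatchReqPerSec
WHERE
    DT >= CAST('2015-03-03 07:00' AS DATETIME)
    AND DT < CAST('2015-03-03 08:00' AS DATETIME);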

Now you can quickly perform any sort of analytics that you want on the raw data. Just remember to turn off the Agent job when you are done.

Apr 08 2015

Today I presented a new session on the various paths to becoming a consultant to the PASS Professional Development Virtual Chapter. I thoroughly enjoyed presenting this session, and the questions were great! Plus, the coordinators for the VC have the session up on YouTube already!

The slides for this presentation are available for you to download here, and the sample rate estimator that I put together is available for you here. Please let me know if you have any more questions!

Apr 03 2015

I am very proud to announce that one of my PASS Summit 2014 sessions was selected as one of the Top 10 of the conference, and the presentation is now available on the Best of PASS Summit 2014 list. The session is “Right-Sizing Your SQL Server Virtual Machine”, and is available for you to watch at PASStv via YouTube here:

The top 10 sessions of the conference are (in no particular order):

  • Advanced Data Recovery Techniques (Paul Randal, SQLskills)
  • World’s Worst Performance Tuning Techniques (Kendra Little, Brent Ozar Unlimited)
  • Advanced Modeling with Analysis Services Tabular (Alberto Ferrari, SQLBI)
  • BI Power Hour (Matt Masson and Matthew Roche, Microsoft)
  • Building Bullet-Resistant SSIS Packages (Tim Mitchell, Linchpin People)
  • Five Execution Plan Patterns to Watch For (Erin Stellato, SQLskills)
  • Inside Ola Hallengren’s Maintenance Solution (Ola Hallengren, Saxo Bank)
  • Query Store – A New SQL Query Tuning Feature (Conor Cunningham, Microsoft)
  • Query Tuning Mastery: Manhandling Parallelism, 2014 Edition (Adam Machanic, Data Education)
  • Right-Sizing Your SQL Server Virtual Machine (David Klee, Heraflux Technologies)

Congrats to everyone who was fortunate enough to present at the conference and to those selected for this list! I wish everyone luck with their submissions for Summit 2015.

Apr 01 2015

As I mentioned in my storage benchmarking post, storage performance is one of the critical infrastructure components underneath a mission-critical SQL Server.

My de facto storage benchmarking utility has recently changed. Last October, Microsoft released a great free utility called diskspd, freely available at http://aka.ms/DiskSpd. I consider it a very solid, modern replacement for the much-loved SQLIO. It is a synthetic I/O subsystem workload generator that runs from the command line. It produces tests similar to SQLIO’s, such as read or write, random or sequential, with control over the number of threads, thread intensity, and block sizes, but it also gives us some significant improvements.

The benefits of diskspd include:

  • Sub-millisecond granularity on all tests, extremely useful for local SSDs and flash/hybrid storage arrays
  • Ability to mix reads and writes in the same test, similar to IOmeter
  • Latency output metrics for the read and write portions of the test, with standard deviation values
  • CPU consumption analysis by thread during the tests
  • Latency percentile analysis from the 0th through the 99th percentile, then 99.9 up to 99.99999999, and finally 100%, which is very useful for finding inflection points at the extremes that can skew test averages
  • Workload file placement and size defined in the command line parameters, which helps keep the test cycles compact
  • Ability to set the entropy values used in workload file generation
  • Plain text output with an option for XML, which makes the results easy to convert and use elsewhere

So, how do we use this utility? Simple. Download the executable from TechNet (source code is available on GitHub for those who are interested) and extract the archive to your file system.

For this example, we’ll use c:\diskspd. Copy the diskspd.exe file from the platform folder of your choice into the c:\diskspd folder to keep the pathing simple.

Some of the command line parameters are as follows.

  • -b – Block size of the I/O. Specify your unit of size (K/M/G). For example, 8KB block tests use the flag -b8K.
  • -c – Creates workload file(s) of the specified size. Specify your unit of size (K/M/G). For example, a 50GB workload file can be created with the parameter -c50G.
  • -d – Test duration, in seconds. Tests longer than 30 seconds each are usually sufficient, and even longer tests are suitable for production workload testing.
  • -h – Disables hardware and software caching within the Windows OS layer, which mimics SQL Server behavior.
  • -L – Captures latency information. Vitally important for SQL Server performance testing.
  • -o – Outstanding I/Os (queue depth) per worker thread. Setting this higher increases the intensity of the test, which pushes your storage harder.
  • -r – Random or sequential. If -r is specified, random tests are performed; if this parameter is omitted, sequential tests are performed.
  • -t – Worker threads. I usually set this to the number of non-hyperthreaded cores on the server.
  • -w – Write percentage. If the test is a pure read test, set this to zero; if it is a pure write test, set it to 100. You can mix reads and writes: for example, for a 20% write / 80% read test, set the parameter as -w20.
  • -Z – Sizes the workload write source buffers to a specific number of bytes. Specify your unit of size (K/M/G). The larger the value, the more write entropy (randomness) your workload data file contains. Experiment with this value based on your system and database workload profiles. For example, 1GB source buffers use the flag -Z1G.

At the end of the line, specify the workload placement location and file name.
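
For instance, a hypothetical large-block sequential read test, useful for gauging scan or backup throughput, might look like the following. The 64KB block size, thread count, and queue depth here are illustrative choices of mine, not prescriptions.

diskspd -b64K -d30 -h -L -o8 -t4 -w0 -c50G e:\diskspd\io.dat > resultsseqread.txt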

Other parameters exist for more advanced workload simulations, so read the great documentation that accompanies the executable.

What if you want to simulate SQL Server? For OLTP-type workloads, use the following sample command as a starting point.

diskspd -b8K -d2 -h -L -o4 -t4 -r -w20 -Z1G -c50G e:\diskspd\io.dat > resultssql.txt

This command executes a two-second random 80% read / 20% write test with an 8KB block size against a 50GB workload file located on the E: drive, using four worker threads with four outstanding I/Os each and a 1GB write entropy source buffer. It saves the output text into the resultssql.txt file for reference. Increase the -d duration for a more representative run.

You can also save this into a batch or PowerShell script to make this test easily repeatable.
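
For example, here is a minimal PowerShell sketch along those lines; the executable path, target file, block sizes, and output file names are placeholder choices of mine.

# Run the same OLTP-style 80/20 test at two block sizes and keep each result.
$diskspd = 'C:\diskspd\diskspd.exe'
$target  = 'E:\diskspd\io.dat'
foreach ($block in '8K','64K') {
    & $diskspd "-b$block" -d30 -h -L -o4 -t4 -r -w20 -Z1G -c50G $target |
        Out-File "C:\diskspd\results_$block.txt"
}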

Execute this with Administrator privileges; otherwise you might see an error about insufficient permissions to create the workload file, or the file creation might take considerably longer.

[Image: diskspd_01]

The output presents some fantastic granular data. Let’s investigate the sections.

[Image: diskspd_02]

The header simply shows the parameters that you used to run the individual test, including the command line itself.

[Image: diskspd_03]

The next section shows the CPU consumption, broken out by user and kernel time by worker thread, for the test cycle.

[Image: diskspd_04]

The next section gets more interesting. We now have the IOPS and throughput metrics, broken out by worker thread and by reads and writes, for the test. The IOPS by read and write matter the most here, with throughput a close second. The operation counts should be very similar across threads. Higher IOPS and throughput values are better.

[Image: diskspd_05]

The last section is the most interesting. It presents a percentile analysis of the storage latency from the minimum value up to the maximum. Look for points of inflection in the data. In this test, you can see a significant inflection point between the 99th and 99.9th percentiles. Statistically speaking, less than one percent of the I/Os in this test had a latency greater than 4.123 milliseconds, and less than one tenth of one percent had a latency greater than 11.928 milliseconds.

In conclusion, storage testing with diskspd has never been easier! This utility is now my de facto storage benchmarking tool. Give it a try!

Stay tuned – I’ve got a surprise coming!