Dec 17 2014

Yesterday morning I opened my email and was very pleased to find that I have been nominated for the 2014 Tribal Awards from Simple-Talk and SQL Server Central, in the category of ‘Beyond the Call of Duty / Outstanding Contribution’. WOW!  I’m honored! Now, I’m not going to tell you to vote for me. Vote for the person that you feel deserves this award, and vote in the other categories as well! The categories include:

  • Best Article
  • Best Blog
  • Fantasy Mentor
  • Best Presentation
  • Beyond the Call of Duty
  • Best New Community Voice
  • Best New Book
  • Best Free PowerShell Script
  • Best Twitter Account
  • and last but not least, Best Costume

There are some incredible people on this list, all of whom have contributed a tremendous amount of knowledge. These awards are a fantastic way to show your support for all of the great participants in the SQL Server community that I care so much about, so go vote for your favorites in each category! Voting closes on January 2nd – so vote now! The winners will be announced on Simple-Talk and SQL Server Central on January 20th.

Dec 15 2014


You should benchmark your storage immediately. If you are a database administrator, you should benchmark it yesterday. And today. And next week.

Well, that’s exaggerating things a bit… but storage performance should be at the core of your ongoing performance metric collection and test process. Storage performance matters that much to the business your data powers. If it’s not performing well, it’s your duty to document it and then work with the folks responsible to do something about it.

Why does it matter so much?

Consider a modern application server stack.

application stack

Profile the items in this stack for their performance characteristics. Generally speaking, storage is slower than memory or CPU. Application code can be all over the place, so I’ll disregard that layer for this discussion.

Your storage slows down your SQL Server. It’s the inherent nature of the technology. So why is storage slow? It’s the most complex component in the stack, has the most moving parts, and historically has had the slowest components. Modern flash arrays are working in our favor nowadays, but storage is still slower than memory or CPUs.

Why is it so complex? It’s not just a black box, as it’s usually represented on a lot of system diagrams. Expand the storage component in this stack – it’s not that simple.

A modern storage platform has many layers and parts, all of which are endlessly customizable by the organization and the storage administrators who architect and maintain such environments. As DBAs, we might not be able to see deep into this stack (or need or care to), but we get the end result of the stack in the form of the performance underneath our data.

Three metrics matter to database servers (in this order):

  • Latency (measured in milliseconds)
  • I/O operations per second (IOPS)
  • Throughput (measured in megabytes per second)

The round-trip time from your server to the disk and back is the latency. High latencies mean that your highly concurrent database server just cannot get data back from the storage fast enough. It could mean a lot of things – the path between the storage and the server is clogged, the storage just can’t keep up with the requests (or the requests running in the background), or something else is problematic. Usually these round trips are measured in milliseconds, unless you’re on a flash or flash-hybrid array, where the performance might be sub-millisecond.

I/O operations per second (IOPS) is how many of these operations the storage can process in a second. Faster is always better. The more it can support and sustain, the better the overall performance, concurrency, and scale of your servers. For perspective, a single desktop-grade SATA disk can sustain about 100 IOPS. A 15K RPM SAS disk can sustain 175 to 210 IOPS. A lower-end IP-based SAN can usually sustain around 2,500 IOPS. Flash SANs can handle one million IOPS and beyond!

Throughput is a combination of block size and IOPS. If a storage unit can handle a 1,000 IOPS read stream with 4KB blocks, you should be reading at roughly 4,000 KB/s, or about 4MB/s of throughput.
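The arithmetic is simple enough to sanity-check in a couple of lines. Here is a minimal sketch in Python (the function name and numbers are mine, purely for illustration):

```python
def throughput_kb_per_s(iops, block_kb):
    """Approximate throughput from an IOPS rate and a block size in KB."""
    return iops * block_kb

# A 1,000 IOPS stream of 4 KB blocks moves 4,000 KB/s, or roughly 4 MB/s.
print(throughput_kb_per_s(1000, 4))  # -> 4000
```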

IOPS and throughput also vary with the block size of each operation. Most Windows NTFS-formatted volumes are formatted with the default 4KB allocation unit size, but Microsoft recommends a 64KB NTFS allocation unit size for SQL Server volumes. Generally speaking, if the block size of a stream against a SAN/NAS doubles, either the IOPS are cut in half or the throughput doubles, depending on whether the storage can handle the extra load. For example, on the local SSD in my workstation, we can see this very quickly with one of the utilities I’ll be exploring on this blog soon – SQLIO.


The -bX parameter is the block size, in KB. The two tests show a sequential read test on my local SSD with both 4KB and 8KB block sizes. The first test did not stress the storage to its maximum performance, so we had headroom to go up, and it did: the throughput doubled when the block size doubled.
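The two regimes – headroom remaining versus storage saturated – can be modeled in a few lines. This is a simplified sketch with made-up device ceilings (real hardware behaves less cleanly), not output from the tests above:

```python
def predict(block_kb, max_iops, max_kb_per_s):
    """Predict achieved IOPS and throughput (KB/s) for a given block size,
    for a device limited by both an IOPS ceiling and a throughput ceiling."""
    iops = min(max_iops, max_kb_per_s / block_kb)
    return iops, iops * block_kb

# Hypothetical device: 10,000 IOPS and 200,000 KB/s ceilings.
for block in (4, 8, 16, 32, 64):
    iops, kb_s = predict(block, 10_000, 200_000)
    print(block, iops, kb_s)
# While IOPS-bound (4 -> 8 -> 16 KB), throughput doubles with block size.
# Once throughput-bound (32 -> 64 KB), IOPS are cut in half instead.
```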

From within SQL Server, you can see this fairly easily. SQL Server keeps track of disk latencies by database file, and you can query to see the average values per drive and by file. This query came from Glenn Berry’s diagnostic queries.

-- SQL Server 2012 Diagnostic Information Queries
-- Glenn Berry
-- November 2014
-- Last Modified: November 3, 2014
-- Twitter: GlennAlanBerry
-- Drive level latency information (Query 24) (Drive Level Latency)
-- Based on code from Jimmy May
SELECT [Drive],
    CASE
        WHEN num_of_reads = 0 THEN 0
        ELSE (io_stall_read_ms / num_of_reads)
    END AS [Read Latency],
    CASE
        WHEN io_stall_write_ms = 0 THEN 0
        ELSE (io_stall_write_ms / num_of_writes)
    END AS [Write Latency],
    CASE
        WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
        ELSE (io_stall / (num_of_reads + num_of_writes))
    END AS [Overall Latency],
    CASE
        WHEN num_of_reads = 0 THEN 0
        ELSE (num_of_bytes_read / num_of_reads)
    END AS [Avg Bytes/Read],
    CASE
        WHEN io_stall_write_ms = 0 THEN 0
        ELSE (num_of_bytes_written / num_of_writes)
    END AS [Avg Bytes/Write],
    CASE
        WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
        ELSE ((num_of_bytes_read + num_of_bytes_written) / (num_of_reads + num_of_writes))
    END AS [Avg Bytes/Transfer]
FROM (SELECT LEFT(UPPER(mf.physical_name), 2) AS Drive,
        SUM(num_of_reads) AS num_of_reads,
        SUM(io_stall_read_ms) AS io_stall_read_ms,
        SUM(num_of_writes) AS num_of_writes,
        SUM(io_stall_write_ms) AS io_stall_write_ms,
        SUM(num_of_bytes_read) AS num_of_bytes_read,
        SUM(num_of_bytes_written) AS num_of_bytes_written,
        SUM(io_stall) AS io_stall
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf WITH (NOLOCK)
        ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
    GROUP BY LEFT(UPPER(mf.physical_name), 2)) AS tab

You can see each drive on my storage represented here (and one of the volumes is busy in the background so I can drive up some latency counters). The latency counters are in milliseconds. Remember, these are cumulative averages, so a period of unusual activity can skew the numbers significantly.
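Since sys.dm_io_virtual_file_stats exposes cumulative counters, one common trick is to take two snapshots and compute the averages over just the interval between them, which surfaces a blip that the lifetime averages would hide. A minimal sketch of the delta math in Python, using made-up counter values rather than real DMV output:

```python
def interval_read_latency_ms(before, after):
    """Average read latency over the interval between two cumulative snapshots.
    Each snapshot is (num_of_reads, io_stall_read_ms)."""
    reads = after[0] - before[0]
    stall_ms = after[1] - before[1]
    return stall_ms / reads if reads else 0

before = (200_000, 1_000_000)  # lifetime: 1,000,000 ms over 200,000 reads = 5 ms avg
after = (201_000, 1_050_000)   # 1,000 more reads, 50,000 ms more stall

print(interval_read_latency_ms(before, after))  # -> 50.0 ms during the interval
```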

sql server latency

That leads me to a good transition – what do you use to actually test the storage? A lot of options are available – and most of them are free. Over the next month or so, we’ll explore how to use some of my favorite free disk benchmarking utilities, including the following:

  • Microsoft SQLIO (with SQLIO Batch)
  • DiskSpd (new from Microsoft)

Warning! Disk benchmarking utilities can put a significant strain on your storage subsystem. Unstable storage devices can actually crash under the load. Even stable storage can suffer performance degradation, both for your workload and for everything else running on it. DO NOT run any storage test on a production environment without the express permission of those responsible for the environment. I will not be responsible for any disruption in service to any system that has a stress test run on it inappropriately.

Now, with that out of the way, stay tuned for more posts soon on how to benchmark the storage underneath your SQL Servers, and how to simulate OLTP traffic on various disks with some advanced parameters of these tools!

Dec 08 2014

This month’s PASS High Availability and Disaster Recovery virtual chapter presentation is by Edwin Sarmiento ( b | t | l ), entitled “A Lap Around Multi-Subnet Clustering for SQL Server DBAs”. The presentation is Tuesday, December 9th at 12pm Central time. Come learn more about this advanced topic with us! Register for this free presentation.

Deploying a Windows Server Failover Cluster (WSFC) on Windows Server 2008/2012 has become a lot easier with the availability of online resources. And with more advanced features in WSFC, it has become both a high availability and disaster recovery solution for SQL Server databases. In this session, we will learn the underlying concepts and principles when designing a multi-subnet cluster to address both high availability and disaster recovery requirements. With the knowledge gained in this session, you will be able to properly design a multi-subnet WSFC stretched across multiple data centers that meets your overall recovery objectives.

Edwin M. Sarmiento is a Microsoft Certified Master & SQL Server MVP from Ottawa, Canada. He specializes in high availability, disaster recovery and system infrastructures running on the Microsoft server technology stack, ranging from Active Directory to SharePoint and anything in between. He lives up to his primary mission statement: “To help people and organizations grow and develop their full potential.”

Nov 28 2014

On December 10 at 11am CST I will be presenting the next PASS Virtualization Virtual Chapter monthly webinar, where you can ask me any of the burning questions about virtualized SQL Server that you’ve been just itching to ask! It’s an open question-and-answer session. Please submit your questions to Tom Norman, the chapter lead, and we’ll be sure to get them answered for you! RSVP here for this free webinar, and I’ll see you all then!

Nov 18 2014

I don’t care what technology or technologies you enjoy using in your daily job. If there’s an enthusiastic community behind it, you need to become part of it. Your job and career will be better as a result.

I’ll start this personal post with a bittersweet tale from my childhood. The topic came about from an unexpected email I received yesterday: I got news that two old friends of mine had passed away in the last month. Both deaths were untimely, as neither was much older than me. These were two people from my childhood, before I could drive, and before there was a widespread Internet – back in the BBS days. For some of you – yes, that time did actually exist.

One of these people was a gifted entrepreneur who ran middle Georgia’s (where I grew up) first major bulletin board system and eventually the first regional Internet provider. The other was a great musician and all-around wonderful person. We all competed to dial in each night to chat with each other on her BBS. This was when I was in the impressionable twelve-to-twenty age range.

I got to know some truly great people – personally – and became fast friends with them for life. But the one who had the most influence on me was the owner of this system – affectionately known as the sysop. She owned and managed the BBS, owned a small architectural engineering firm, and was an active environmental and political activist. She was driven and determined to build the best environment for her passions and goals in life. The BBS came from her desire to help people connect and interact. Her personality, sense of involvement, and open arms attracted everyone in the regional middle Georgia community (at least those of us nerds who were BBSing) and helped us all build a community around this system. We all wanted to be there with the group.

After several years of us all fighting to get access, her adding crazy amounts of phone lines into her house, and seemingly infinite late-night chats on every topic imaginable, she got the great idea to hold a gathering at her place to help us all meet face to face instead of just nickname and personality to nickname and personality. People of every age, race, political bias, orientation, and background came to meet each other. It was amazing. I was easily a third of the average age in the room, but felt more at home with these people than I did with anyone at school. I felt like I was home.

This one party led to more, until it became routine to hang out at the sysop’s house and catch up with this community. This person taught me the value of a number of core life skills, namely community building, entrepreneurship, and determination. I never realized how amazing this experience was until much later in life.

Fast forward about ten years. I get into the full-time work force and discover that I really enjoy working with data. I focused my sights on Microsoft SQL Server, and then started to poke around the net for knowledge. Step back a bit and you see a core group of people contributing to the growing knowledge base for the emerging platform through books and technical blog posts. This group was always encouraging people to learn more, especially those who were new at SQL Server, and did so in the most welcoming and inviting ways.

Now fast forward another ten years. IMHO, the SQL Server community has evolved into the most tight-knit group of technologists on the planet. The amount of information sharing is unparalleled. The welcome newbies receive when asking for knowledge is amazing. It really is like a family. When I fully immersed myself in the community at my first PASS Summit and spoke with Kevin Kline, for the first time since the BBS days I felt like I was home again. It’s hard to describe. It’s the same feeling of community and family that I had with that group back in the day, but had not felt since then. With each Summit and SQL Saturday after that, this sense of community grows more and more. It’s great. I want to keep it up. I want all of you to keep doing what you’re doing. The SQL Server community feels like my family, and for that, I thank you.

Chris and Kelly, RIP. You will be missed by the people whose lives you changed.

Nov 14 2014

This past week of conferences was simply amazing. It was the best week of my career. Period. Now that I’m close to being recovered, I thought I’d put together a short recap of the week.

(I seem to say this after every PASS Summit. Every year has been better than the last. What will next year bring? I am excited to find out!)

The first part of the week, I attended the Microsoft MVP Global Summit from Sunday to Tuesday. The details of the Global Summit and the topics that we covered are all unfortunately under NDA, but rest assured, I am very excited about the future of the products to which I have dedicated this portion of my career.

Tuesday evening to Friday was the PASS Summit 2014 at the Seattle Convention Center, the largest SQL Server conference of the year. My goals were simple – network, present, and network some more.

Welcome to the PASS Summit 2014


My company, Heraflux Technologies, co-sponsored a booth with Denny Cherry and Associates, SQLHA, and Fortified Data, which we called Consultants Corner.

Ben DeBow relaxing in our booth


We had a blast meeting everyone that stopped by the booth! I thoroughly enjoyed the varied conversations around virtualization migrations, performance tuning, and rubber chickens.

The largest presentation that I’ve had so far was Wednesday at 1:30pm. Have you ever had one of those moments where you stop what you are doing and “is this really happening” starts going through your head? I had this moment when I was approaching the doorway for the room where I presented the session. The room had all six doors open and I saw nothing but chairs covering the entire view. I had to stop and pause for a moment because of the wave of emotion. Exhilarated… excited… humbled… Nothing clearly describes how I felt. It was a very surreal moment. When I walked through the room and onto the stage about 45 minutes early, this is the view I saw.

summit 2014 view from stage

This session, “Right-Sizing Your Virtual SQL Server”, was presented to solid attendance in one of the very large conference rooms. It was also broadcast live to the world on the streaming video page at PASStv, which you can watch here. That’s no self-imposed pressure at all, right? The audience and I had a fun time with the 75-minute session going through all of the nuances of why and how virtual SQL Servers should be “right-sized” to maximize the performance and consolidation. The attendees were engaged and energetic, and asked some wonderful questions throughout the session. I really appreciate each and every one of you who came to see me speak!

It was in this session that I announced my company’s intention to release a product that can run in the background on a physical or virtual SQL Server and determine an approximate number of virtual CPUs and an appropriate memory allocation for the server. I am proud to announce that it will be available for FREE. If you would like to become a beta tester once it is ready for release, please send me a quick message with your email address via the contact page here.

Throughout the week, I attended a number of great gatherings after hours. Denny Cherry, SQL Sentry, Fortified Data, SIOS, and others all had some fantastic gatherings at venues near the convention center. Your contributions to community building are much appreciated! We all had a blast, and you helped us all stay out WAY too late most evenings!

On Friday at 4pm I presented another session entitled “Achieving Top Performance with your Virtual SQL Servers” to a great group of die-hard attendees in the last session block of the conference. Being the last session of the conference, I was going to be genuinely thrilled if a single person showed up, but much to my amazement, the room was mostly full! You all were great. The interaction level was high, the questions top notch, and I wanted to keep going until midnight to make sure that I had everyone’s questions answered.

Saturday morning wrapped up the week with a nice, quiet breakfast at Pike Place Market overlooking the waterfront, with friendly banter among a few friends from the #sqlfamily. I did not want the week to end. I spent the rest of the day on two uneventful flights back to Omaha.

The week of this conference is the most amazing week of the year for me, and I am going to attend this conference each and every year going forward. This event is created by the SQL Server community, for the community, which is unique in the tech field. For those of you who have never been, or are looking for the best SQL Server ecosystem training in the world, I cannot stress enough the value of this conference. You should leave the conference with at least three things that will directly improve something you are working on or manage in your environments. The friends you make will last a lifetime. The professional contacts will help you when you’re in a bind in your business.

As for next year, the next PASS Summit is back in Seattle on October 27-30, 2015. I can’t wait to see you all there!

Landing at Seatac airport (cleaned up by Brandon Leach)


Chris Shaw and John Morehouse setting up for their precon


Saturday morning breakfast on the water


See you all next year!
