Jan 20 2017
 

I’m proud and humbled to announce that my company, Heraflux Technologies, is hiring for a Data Platform Solutions Architect!

We are looking for a highly qualified technologist who is comfortable with database technologies, such as Microsoft SQL Server and other DBMS platforms, and infrastructure technologies, such as virtualization, converged platforms, public cloud, and system administration.

The Solutions Architect is accountable for working with a number of organizations in a variety of industries to improve the availability, performance, and efficiency of the infrastructure stack underneath the application.

This position is primarily a SQL Server-focused role, and only senior-level SQL Server administrators should apply. More details are available here, and we look forward to hearing from you!

Jan 18 2017
 

If you’re in the Boston area, I urge you to go to this year’s Virtualization Technology Users Group (VTUG) Winter Warmer event tomorrow at Gillette Stadium. It’s better than a regional VMUG, and not just VMware specific. It covers all things infrastructure, virtualization, and cloud, and it’s held at the stadium to boot! It’s a great event, and I’ll be there in the afternoon at the Microsoft booth answering questions on virtualization, cloud, and business-critical apps!

VTUG Winter Warmer

When: January 19, 8:00am to 6:00pm

Location: Gillette Stadium, Patriot Place, Foxboro MA (GPS 1 Patriot Place)

Register now – for FREE – and I’ll see you there tomorrow!

Dec 19 2016
 

I’m pleased to announce a new collaborative whitepaper with ScaleArc entitled “Improving uptime and performance of legacy apps on Microsoft SQL Server”.

Performance tuning inefficient legacy applications can be a tremendous drain on an organization’s development resources. Handling availability at the database layer of these applications can be an even larger challenge. ScaleArc can help with both by improving the availability of the database layer and accelerating common repetitive database queries from the application.

The past five years or so have seen a shift in people’s expectations for application performance. As they’ve seen their personal apps deliver continuous uptime and high performance, they’ve grown more frustrated with the slower performers in business.

Despite these business apps being critical to corporate performance, users have “just dealt with it” for years whenever they’re slow or unreliable. One challenge, of course, is that these enterprise apps are much more complex than personal apps, making them harder to update. Plus, making the necessary changes to “update” the application might be out of an organization’s hands when they don’t control the source code, or costly and time prohibitive when they do.

A compelling alternative has emerged. Rather than recoding apps for improved uptime and performance in a database environment, consider the benefits of database load balancing software. A pioneer in this technology, ScaleArc has set out to make business application platforms more highly available and better performing by managing the traffic into the database servers. The ScaleArc software acts as an intermediary between the application and database layers. Sitting in between, it buffers database outages from impacting application availability and transparently accelerates the application’s performance, all without code or usability changes.

This whitepaper explains the benefits of the ScaleArc software and shows benchmarking results of one feature set – app-transparent caching. In our tests, the ScaleArc software enabled a 5x improvement in operations processed per minute and a drop in CPU consumption from a peak of 100% to a peak of 60%.

Download this new whitepaper today!

Oct 21 2016
 

This past week I had the pleasure of attending our first VMworld Europe conference, held at the Fira Gran Via in Barcelona, Spain. What a great experience!


The key takeaways from the conference are quite interesting. The announcements from last week and this week indicate some challenges in portions of VMware’s strategy going forward. This photo from the speaker room sums up our thoughts on the state of VMware right now – many of the important puzzle pieces are in place, but there are some significant holes that need to be addressed for a more complete picture.


Sessions

First up, I presented a session entitled “Performance Tuning and Monitoring for Virtualized Database Servers” with Thomas LaRock from SolarWinds. We talked about the need to monitor each layer at and beneath the databases in order to become proactive in performance troubleshooting and monitoring. We presented to a packed room and fielded a number of great questions afterwards.

A little later in the day, I presented a session entitled “Performance Perspectives” for the vBrownBag TechTalks, where I talked about how VM admins need to understand that measuring performance statistics only from the hypervisor presents only a portion of the actual performance of the system.

Video: http://www.youtube.com/watch?v=KhYxiZPSvGE

At the end of the day, I sat on a panel session with Patric Chang and Jonathan Flynn from SanDisk, and Jase McCarty from EMC entitled “Running Business Critical Applications and the Software Defined Data Center on Hyper-Converged Infrastructure and VSAN” where we discussed the implications of business-critical applications and their intersection with hyperconverged and all-flash systems.


Wednesday

Wednesday was also a day filled with presentations and meetings. I started with a fun and action-packed session with Michael Corey entitled “Monster VMs (Database Virtualization) Doing IT Right” where we gave a rapid-fire stream of useful tips and tricks on maintaining maximum performance of virtualized SQL Server and Oracle VMs.


That afternoon I presented a repeat session of “Performance Tuning and Monitoring for Virtualized Database Servers” to another near-capacity room. Thank you to all of the die-hard attendees who came to the repeat session when you could not get into the previous one.

VMworld also has a lounge area called “Meet the Experts”. Michael Corey and I chatted with numerous attendees about their unique challenges with virtualizing business-critical apps, especially databases, and hopefully our answers will help them go and solve some of their concerns!


Announcements

At this conference, VMware announced the next version of their flagship virtualization suite, vSphere 6.5. The whole list of updates and improvements is found here. I feel that this release is a solid evolutionary step towards the future of the on-premises software defined datacenter. I am exceptionally happy about the new REST-based API for managing the environment. We’ve got some ideas that we’re working on where this will come in handy! VMware says general availability of this release is scheduled for later this year.

However, the overarching buzz at the conference was from VMware’s announcement last week. VMware is partnering with Amazon AWS to provide the means to extend the on-prem VMware deployments to the Amazon public cloud in order to create a seamless hybrid cloud approach. The reaction from the attendees was mixed. I am going to save my thoughts on this announcement for an upcoming blog post. It does open a lot of questions about target platform performance, database licensing implications, and operational management.

Barcelona

I did get to venture outside of the convention center and briefly explore Barcelona on Monday with several other SQL Server presenters. Barcelona is an incredible city. I wish I had a full month to just go exploring!

Thanks for having me speak at the conference VMware! This was my first trip to VMworld Europe, but it will certainly not be the last!


Sep 06 2016
 

I’m proud to be speaking at this year’s IT/Dev Connections conference in Las Vegas during the week of October 10th! This conference is all about mastering various tools and products from Microsoft, and I’m delivering two sessions in the data platform track.

Business Continuity for Virtual SQL Servers

Wednesday, October 12 – 11:00 am – 12:15 pm

Do your SQL Servers have a different high availability or disaster recovery strategy than the rest of the servers, and do the differences cause some friction with the infrastructure admins? SQL Server provides many features for improving high availability and disaster recovery resiliency, and they continue to evolve and change with each release. However, do these strategies and features complement or conflict with the current infrastructure strategy? How do these layers work together to protect the business? What if you could augment (or possibly replace) a complex SQL Server BC solution with a simpler, but just as effective, solution using some of the virtualization features available in today’s virtualized environments? This session will discuss strategies for improving your SQL Server uptime while reducing the complexity of your overall SQL Server continuity strategy.

SQL Server Infrastructure Performance

Thursday, October 13 – 10:15 am – 11:30 am

The compute infrastructure beneath your critical SQL Servers is just one layer in the application system stack, but if issues are rampant under your data, your database performance is guaranteed to suffer, and your end users’ application performance experience is sure to be poor. As DBAs, we are usually at the mercy of the groups managing this hidden layer, but it does not have to be this way. Learning more about these layers of the system stack – physical compute server, virtualization, storage, interconnects, and networking – will help you understand how the infrastructure responds to a query request, and will make you a stronger database professional. You can also indirectly (or even directly) measure the performance of these layers to gain critical insight into their peculiarities.
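As one example of directly measuring the storage layer from inside SQL Server, a query against the sys.dm_io_virtual_file_stats DMV can expose the I/O latency each database file actually experiences, without needing access to the hypervisor or SAN consoles. This is a sketch, not part of the session material:

```sql
-- Sketch: per-file read/write latency as seen by SQL Server itself.
-- The counters are cumulative since instance start, so for current
-- behavior, sample twice and compare the deltas.
SELECT
    DB_NAME(vfs.database_id)  AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    CASE WHEN vfs.num_of_reads = 0 THEN 0
         ELSE vfs.io_stall_read_ms / vfs.num_of_reads END   AS avg_read_latency_ms,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```

Comparing these numbers against what the infrastructure team reports from their layer is a quick way to spot where the two views of "storage performance" diverge.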

Register now for this great conference. I hope to see you all there!

May 01 2016
 

As Bala Narasimhan from PernixData and I discussed in a webinar last week, SQL Server 2005 hit the end of its extended support life on April 12th of this year. Any existing SQL Server 2005 instances should be first and foremost on an IT organization’s modernization agenda this year. If you still have 2005 instances in your environment, the organization is now at risk. If undiscovered bugs or other issues come out of nowhere and cause trouble, your only course of action is to upgrade. So, why wait? Start the process now!

If you need ammunition to help convince your organization about the need to upgrade, just look at the lack of official support. That fact alone should be enough to push the organization to upgrade. If you need additional help, take a solid look at all of the new and compelling features included in SQL Server since 2005!

As discussed in the webinar, a thorough checklist should be developed and scrutinized to help an organization through the upgrade process. Following such a checklist can help you identify and remediate any challenges that might come from the upgrade process.

This list is by no means exhaustive. It’s my personal high-level checklist of things to watch out for, and feel free to add to this checklist anything that matters to you and your organization.

  • Migration process
    • Side-by-side migrations are generally better than in-place upgrades.
    • In-place upgrades leave few roll-back strategies other than VM snapshots, while side-by-side migrations give you the chance to practice the migration and explore test copies of the application to look for any functionality changes.
    • Older installations of SQL Server 2005 could be 32-bit, or the Windows OS version could be 32-bit as well. Migrations help you finally get to 64-bit stacks and leave 32-bit in the dust.
    • What is your upgrade path? Backup and restore? Detach and attach? Replication? Log shipping? Determine your SLAs and select a migration strategy that fits the available migration window.
  • Target instance
    • Is the new SQL Server instance going to be out of support soon? Is it as current as either the organization, licensing, or application allows? Is it fully patched?
    • Can the target hardware (and hopefully virtualization layer) handle the features you intend to use? For example, In-Memory OLTP has higher than expected hardware recommendations, and your target hardware needs to live up to these expectations or else it might artificially hold back performance.
  • Performance expectations
    • Do you have ongoing performance baselines and benchmarks from the current SQL Servers to use for a performance comparison of the target environment?
    • Have you stress tested the target platform to determine if it can handle your workloads? Synthetic workload testing is a great place to start, but real-world testing can help you validate or rule out that the target platform will suffice.
  • Code upgrade
    • Have you performed a high-level check of the code with the SQL Server Upgrade Advisor? It reviews the code for anything that could break as part of the upgrade process, so that you can fix it ahead of the upgrade.
    • What about SSIS packages, SSRS reports, and SSAS processing?
    • Do you have any unnecessary garbage inside the database that is being migrated? For example, look for unused indexes, log or temp tables, or anything else that could be cleaned up before you migrate the database.
  • Application changes
    • Read the release notes for the target version of SQL Server. Then read them again.
    • Have you read the ‘breaking changes’ and ‘behavioral changes’ sections of Books Online? These documents contain great insight into engine behavior and functionality that could change as part of the upgrade.
    • Is the application able to use the latest version of the connection libraries? For example, do you need to upgrade the SQL Native Client or ODBC drivers to take advantage of new SQL Server features?
    • Are any items in the new instances changing expected behavior of items such as query execution plans, ETL processes, long-running tasks, etc.? These can be identified well in advance with the test instance and validation processes.
    • Verify that the application functions as normal after updating the database compatibility level to current. Beware the 2014 cardinality estimator improvements, as occasionally I find applications that respond poorly to the changes and should be set to 2012 compatibility to maintain performance.
  • Upgrade process
    • Perform the upgrade process as normal, and validate that things appear as normal.
    • Once completed, perform the following tasks:
      • Check all logs (SQL Server error log, Agent error log, cluster log if applicable, Windows event logs, and any virtualization logs) to ensure that nothing of importance is lurking.
      • Run DBCC CHECKDB WITH DATA_PURITY to help with any database that has been migrated forward since the bronze age by checking for values that are not valid for the table column datatypes.
      • Change the database compatibility level to current (or as high as you can go).
      • Rebuild all user database indexes and statistics. You may even need to go as far as updating statistics WITH FULLSCAN.
      • Execute a DBCC UPDATEUSAGE to correct any borked page and row counts.
      • Take a backup!
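
The post-upgrade tasks above can be sketched in T-SQL. This is an illustrative outline for a hypothetical database named [MigratedDB] on a SQL Server 2016 target; adjust names, the compatibility level, and the backup path for your environment:

```sql
-- Sketch of the post-upgrade tasks for a hypothetical [MigratedDB].
USE [master];

-- Check for column values that are not valid for their datatypes,
-- important for databases migrated forward from much older versions.
DBCC CHECKDB ([MigratedDB]) WITH DATA_PURITY;

-- Raise the compatibility level to the target instance's current level
-- (130 = SQL Server 2016; use 120 or lower if the app requires it).
ALTER DATABASE [MigratedDB] SET COMPATIBILITY_LEVEL = 130;

-- Refresh statistics; index rebuilds can be scripted per index, and
-- UPDATE STATISTICS ... WITH FULLSCAN can be run per table if needed.
USE [MigratedDB];
EXEC sp_updatestats;

-- Correct any inaccurate page and row counts.
DBCC UPDATEUSAGE ([MigratedDB]);

-- And take that backup (hypothetical path).
BACKUP DATABASE [MigratedDB]
    TO DISK = N'D:\Backups\MigratedDB_postupgrade.bak';
```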

In addition to the technicalities of the migration process, step back a bit and look at the entire architecture around the data. Is it time to revisit any areas of the design? Is the HA architecture overly complicated, and could Availability Groups help simplify the design and reduce management overhead? Should we look at consolidating databases or instances? Is it virtualized? Is it time to check out Azure SQL DB or an Azure VM for hosting this data in the cloud? What databases exist on these instances that have not been accessed in years?

Take the time to revisit these architectural decisions as part of the upgrade process. It will usually simplify your architecture, reduce the management overhead, improve availability, and increase agility in the datacenter. The business wins and you can sleep more soundly at night!