Hardware vs. tuning: Getting the most from SQL Server


Dealing with a server that seems drastically under-powered can cause frustration all around. Thankfully, tuning the indexes can result in fantastic gains in performance.

As part of what I do, I see so many different configurations of SQL Server in use by customers. In the past few weeks alone, I’ve seen versions of SQL Server from 2000 to the latest and greatest, 2014. Customers have different requirements and expectations for each instance of SQL Server that they have running, and it keeps my job interesting. Most recently I helped with two different servers that I thought were severely under-powered—even I was surprised at how a little tuning could make such a big difference in performance. Here’s a quick breakdown of what I did, and how you can do it too.

Server1


Admittedly, I was surprised to discover that Server1 was running SQL Server 2005 Standard Edition on 32-bit Windows Server 2003. SQL Server is memory intensive, and this configuration limits it to just 2 GB of RAM.

Migrating to new hardware or a virtual machine and upgrading SQL Server, the OS, and the applications won't happen quickly, but it is part of the long-term plan. This customer was hoping that I could find a way to eliminate the frequent timeouts experienced by the main application. After a few configuration changes, the customer asked me to focus on queries that either timed out or took over a minute to run. I could only suggest index changes, because they could be rolled out quickly; any code changes would have to wait, so I didn't suggest those.
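
One generic way to surface the slowest statements (not necessarily the exact query used in this engagement) is to look at the plan-cache statistics in sys.dm_exec_query_stats; the threshold of 15 rows and the columns chosen here are purely illustrative, and the DMV only reflects statements still in the cache:

-- Rough sketch: longest-running cached statements by average elapsed time.
-- total_elapsed_time is reported in microseconds, so divide by 1000 for ms.
SELECT TOP (15)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
    SUBSTRING(st.text,
              qs.statement_start_offset / 2 + 1,
              (CASE qs.statement_end_offset
                   WHEN -1 THEN DATALENGTH(st.text)
                   ELSE qs.statement_end_offset
               END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;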

After I recommended just a handful of index changes to eliminate clustered index scans and key lookups in the top 15 queries, the application performed much better. I also found that changing a single index could improve two or three different queries. The frequency of timeouts dropped drastically, and the customer is very happy with what we accomplished.
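
To illustrate the kind of change involved (the table and column names below are hypothetical, not from the customer's system), a covering nonclustered index removes a key lookup by putting the filtered columns in the key and the remaining selected columns in the INCLUDE list:

-- Hypothetical covering index: the WHERE-clause columns form the key, and the
-- other columns the query returns ride along in INCLUDE, so the optimizer no
-- longer needs a clustered index scan or a key lookup to satisfy the query.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (OrderTotal, OrderStatus);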

Server2


The main database on Server2 is over a terabyte in size and grows as new data is imported each month. The database is not used every day, but when it is, a batch process performs calculations over hundreds of thousands of rows and takes minutes to complete. The main table has over six billion rows, so updates and deletes must be done cautiously to avoid locking up the server.
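
Caution on a table that size typically means working in small batches so locks and transaction-log growth stay manageable; the table name, filter, and batch size below are made up for illustration:

-- Illustrative batched delete (hypothetical table and filter).
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.FactImport
    WHERE LoadMonth = '2013-01';

    IF @@ROWCOUNT = 0 BREAK;  -- stop once no qualifying rows remain
END;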

In today’s database world, a terabyte database is not unusual. What is unusual, however, is that the server (running in a virtual machine) had been assigned only 16 GB of RAM.

I reviewed the server and pointed out the most common settings changes, but I spent most of the day talking to the CIO and software architect about indexes, partitioning, and compression. By changing just one index, they were able to decrease the run time of the batch process from three minutes to about 10 seconds.

We also looked at user-defined functions. UDFs are appealing to developers, but they are often the source of hidden performance problems. In this case, the code was calling the same function, which contained a database call, thousands of times with the same parameters. Using Profiler, I showed them that SQL Server wasn't saving the returned value; it was determining the value over and over. There were several layers of UDFs, but we experimented by eliminating just one, and we immediately saw a 25 percent improvement.
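
To make the pattern concrete (the function, table, and column names here are hypothetical), this is the general shape of the problem: a scalar UDF that hides a data-access query is re-executed for every row, even when its parameter never changes.

-- Hypothetical scalar UDF containing a database call.
CREATE FUNCTION dbo.GetRate (@RateCode int)
RETURNS decimal(9,4)
AS
BEGIN
    DECLARE @Rate decimal(9,4);
    SELECT @Rate = Rate FROM dbo.Rates WHERE RateCode = @RateCode;
    RETURN @Rate;
END;
GO

-- Called per row, the SELECT inside the function runs for every row returned,
-- even though the parameter value is identical each time.
DECLARE @RateCode int;
SET @RateCode = 42;  -- hypothetical: the same parameter for every row

SELECT OrderID, Amount * dbo.GetRate(@RateCode) AS AdjustedAmount
FROM dbo.Orders;

-- One quick win when the parameter doesn't vary per row: call the function once.
DECLARE @Rate decimal(9,4);
SET @Rate = dbo.GetRate(@RateCode);

SELECT OrderID, Amount * @Rate AS AdjustedAmount
FROM dbo.Orders;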

Just a couple of days later, the customer decided to increase the amount of RAM to 48 GB. They weren't sure that the extra memory would actually be that beneficial, since every run of the process accesses different data, so cached pages wouldn't really help; maybe they're correct, and 48 GB won't be much different from 16 GB in this situation. However, I noticed that the Buffer Cache Hit Ratio dropped to 20 percent and Page Life Expectancy bottomed out whenever the process ran on that first day, a surefire sign of memory pressure.
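
Both counters can be checked from T-SQL with a generic query against sys.dm_os_performance_counters (this is a sketch, not the exact check used on that day); note that the hit ratio has to be divided by its "base" counter to be meaningful:

-- Page Life Expectancy: seconds a data page is expected to stay in the buffer pool.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';

-- Buffer Cache Hit Ratio: the raw value divided by its base counter.
SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
    ON b.[object_name] = r.[object_name]
WHERE r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND r.[object_name] LIKE '%Buffer Manager%';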

Partitioning and compression could likely help here. This server is not on Enterprise Edition, but we talked about setting up a prototype on Developer Edition to test my theory. Compression would decrease the storage requirements and improve performance on this read-mostly workload. Partitioning would speed up loading and deleting data and could provide a boost in query performance as well. The two features work well together, since each partition can be compressed at a different level depending on how frequently it is updated.
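
As a rough sketch of how the two features combine (every object name and boundary value below is made up for illustration, and the table is assumed to already live on the partition scheme), each partition can be rebuilt with its own compression setting:

-- Hypothetical monthly partitioning scheme.
CREATE PARTITION FUNCTION pf_LoadMonth (date)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME ps_LoadMonth
AS PARTITION pf_LoadMonth ALL TO ([PRIMARY]);

-- Older, read-mostly partitions get PAGE compression; the partition still
-- receiving monthly loads can stay at ROW or NONE.
ALTER TABLE dbo.FactImport
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);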

Takeaway


If you’re dealing with a server that seems wildly under-powered, try tuning the indexes before taking more drastic measures. You just might be as surprised as I was to find out what a huge difference it can make.


Contributor

Kathi Kellenberger

is a teammate with Linchpin People and a SQL Server MVP. She loves writing, presenting, and teaching SQL Server topics, especially T-SQL. Kathi has been an author, co-author, or technical editor for a dozen books and recently completed recording her first Pluralsight course. Follow Kathi on Twitter @auntkathi.