A great day at PASS Summit today.

I started out the day with the keynote presentation, which had some great info on PASS and the PASS volunteers, followed by Microsoft presenting on in-memory tables and native compilation for stored procedures, described and demonstrated as “Wicked Fast”.

Next I attended a fabulous presentation from Erin Stellato on Extended Events, an improved replacement for the SQL profiler. I learned a few things that I will use regularly to troubleshoot performance issues.

After that Glenn Berry presented on using DMVs to diagnose performance problems. This was one of my favorite sessions for the day. Glenn did a great job presenting about 60 queries that can be used to track different performance and configuration problems on SQL Server.
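Just to give a flavor of what those queries look like, here is a typical query against sys.dm_exec_query_stats to find the top CPU consumers. This is my own illustration of the idea, not one of the queries from the session.

-- Top 10 statements by total CPU time from the plan cache.
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;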

During a break I then had a chance to talk to the Microsoft SQL Server team to get a better understanding of the execution plans associated with recursive CTEs. I will have to blog about this one later, but after talking with four different people from Microsoft, I have a better understanding of exactly what the execution plan shows for a recursive CTE.

The next session was on using table and row compression to speed up SQL Server. Using compression you can reduce the number of logical reads because the data is compressed and fewer pages are hit when running a query. This does, however, increase CPU load, but it looks to me like the small increase in CPU load would be worth it.
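For example, page compression can be enabled on an existing table with an ALTER TABLE ... REBUILD, and you can estimate the savings first with sp_estimate_data_compression_savings. The SalesDetail table name here is just a placeholder.

-- Estimate how much space page compression would save (placeholder table name).
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesDetail',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Enable page compression on the table.
ALTER TABLE dbo.SalesDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);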

At the end of the day during the Exhibitor Reception I had my first book signing on the CTE book at the Joes2Pros booth. We had 30 books to give away, and when the book signing started, there were already 30 people in line to get a book. It was great to talk with people about what they are using CTEs for.


I also had a chance to meet Pinal Dave and talk with him for a bit.

Another great day at PASS Summit 13.

Query To Simulate CPU Load

Quite often when testing different performance measures, from the Database Health Reports to Resource Governor configurations, I end up in a situation where I need to emulate a high CPU load on my test SQL Server. In this case it is usually my desktop or a Hyper-V virtual machine, where I want to see how things will work if I had a nearly 100% CPU load. In the real world you would just have to get millions of users to visit your website that has a SQL Server backend, but it is not that easy in my development / test lab.

Here is what I came up with for a plan. Keep in mind the goal here is to write queries that will use up as much CPU as possible. This is counter-intuitive and completely against everything that I practice on a daily basis, but here goes.

First create a table with poor design. Using UNIQUEIDENTIFIERS for a primary key and a foreign key (parent_id) is probably ugly enough.


-- Deliberately poor design: GUID keys and no clustered index yet.
CREATE TABLE SplitThrash
(
 id UNIQUEIDENTIFIER DEFAULT NEWID(),
 parent_id UNIQUEIDENTIFIER DEFAULT NEWID(),
 name VARCHAR(50) DEFAULT CAST(NEWID() AS VARCHAR(50))
);

Next we fill the table up with lots and lots of rows, specifically 1,000,000 rows; remember, the goal here is to simulate CPU load. If this isn’t enough I often run this script several times. Keep in mind that a GO statement followed by a number says to execute the batch that many times.


SET NOCOUNT ON;
-- GO followed by 1000000 runs this single-row insert one million times.
INSERT INTO SplitThrash DEFAULT VALUES;
GO 1000000

Next, this part makes me just feel nasty. Create a CLUSTERED index on the table that we just filled up, and cluster on BOTH columns that were UNIQUEIDENTIFIERS.


CREATE CLUSTERED INDEX [ClusteredSplitThrash] ON [dbo].[SplitThrash]
(
 [id] ASC,
 [parent_id] ASC
);

At this point it is a bit ugly, but it still doesn’t use much memory. You are probably wondering why I called the table SplitThrash. I gave it this name so that updating the UNIQUEIDENTIFIER columns would cause as many page splits or new page allocations as possible. So we update the parent_id, which is part of the clustered index.

UPDATE SplitThrash
SET parent_id = NEWID(), id = NEWID();
GO 100

This update statement causes chaos in the page structure of the table, as updating the unique identifiers causes quite a bit of processor work.
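If you want to see the damage for yourself, a quick check like this (my own addition, not part of the original script) shows the fragmentation and page count after the updates have run:

SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SplitThrash'),
    NULL, NULL, 'LIMITED');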

On my wimpy VM for this development environment I need to repeat this entire process, creating 4 or 5 tables and running the update in 4 or 5 SSMS windows, in order to use up all of the CPU on the database server.
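If you would rather not copy and paste the CREATE TABLE several times, a rough sketch like this can build the extra copies for you; the SplitThrash1 through SplitThrash5 names are just my own convention, and you still need one UPDATE loop per SSMS window.

-- Rough sketch: create several copies of the table so each SSMS window
-- can hammer its own copy. Adjust @copies as needed.
DECLARE @copies INT = 5, @i INT = 1, @sql NVARCHAR(MAX);
WHILE @i <= @copies
BEGIN
    SET @sql = N'CREATE TABLE dbo.SplitThrash' + CAST(@i AS NVARCHAR(10)) + N'
    (
        id UNIQUEIDENTIFIER DEFAULT NEWID(),
        parent_id UNIQUEIDENTIFIER DEFAULT NEWID(),
        name VARCHAR(50) DEFAULT CAST(NEWID() AS VARCHAR(50))
    );';
    EXEC sp_executesql @sql;
    SET @i += 1;
END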

Don’t try this on a production system, but it is a great test to run on a development server.

See Also:

If you like this you might also like my article on how to use up all of your memory with a common table expression.

Making Progress on the Server Health Re-write

Originally I implemented the Database Health Reports as a series of SQL Server Reporting Services reports, which were very handy and easy to create, but hard to distribute in a way that they could be shared and customized, which is what a project like the Database Health Reports needs.

Here is what I have so far, and I think that I am about a month and a half away from a Beta release; let’s call it September 1st for the beta. If you are interested in the beta, please register on this website; all registered users will be notified when the beta becomes available.

Here are a couple of sneak peek screenshots of the Database Health application. This application includes reports on Backups, CPU by Query, Connections, Duplicate Indexes, Index Fragmentation, Long Running Queries, Plan Cache, Stored Procedure Performance, Statistics, Unused Indexes, and much, much more.

Here is a preview of the statistics report, and the Statistics Advisor.

So if you are interested in getting in on the Beta of the new Database Health Reports application, just register on this site and you will be notified when the beta becomes available, about the first of September.