Database Health Monitor Beta 8.1 Released Today

Database Health Monitor Beta 8.1 was released today. After the Beta 8 release about three weeks ago, a couple of small issues were discovered, and they have been addressed in Beta 8.1.

Beta 8.1 was released to fix some obsolete code left over from a very old beta that caused the application to expire on Feb 1st, 2014; that limitation has now been removed. Beta 8.1 also adds better error handling. In Beta 8, minimizing the app caused a crash in some configurations. This has now been fixed.

New Charts

The biggest change in Beta 8 was the rewrite of many of the charts for an improved look and feel.

If you are using Beta 8 or older, please update to Beta 8.1. Download at http://DatabaseHealth.com/download

Thanks!  Enjoy the product.

-Steve

Speaking at Vancouver DevTeach this week.

This week I will be attending and speaking at Vancouver DevTeach. The event takes place December 2nd to 4th, 2013 at the Vancouver Sheraton Wall Center Hotel. Monday and Tuesday I have morning presentations, which leaves the rest of the day to attend a few other sessions.

Here are the sessions that I will be presenting.

Monday: Using FILESTREAM and FILETABLES in SQL Server

Configuring and Using FILESTREAM and FILETABLES in SQL Server. Developers love to use SQL Server to store files, but this causes headaches for the DBA. Finally there is a reasonable solution for file storage in SQL Server: FILESTREAM and FILETABLES. SQL Server 2008 introduced FILESTREAM, and SQL Server 2012 adds FILETABLES. Learn how to configure and manipulate files in your SQL Server with FILESTREAM, then learn how to do everything that FILESTREAM sounds like it should do with FILETABLES. With FILETABLES, inserting a file is as easy as drag and drop.
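As a rough sketch of the kind of setup the session covers (the database, filegroup, path, and table names here are made up for illustration, and a real setup has a few more prerequisites):

```sql
-- Instance level: enable FILESTREAM for T-SQL and file I/O access
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- The database needs a FILESTREAM filegroup, and FILETABLES
-- additionally require non-transacted access with a directory name
ALTER DATABASE [FileDemo]
 ADD FILEGROUP [FileStreamGroup] CONTAINS FILESTREAM;
ALTER DATABASE [FileDemo]
 ADD FILE (NAME = 'FileDemoFS', FILENAME = 'C:\Data\FileDemoFS')
 TO FILEGROUP [FileStreamGroup];
ALTER DATABASE [FileDemo]
 SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = 'FileDemo');

-- A FILETABLE has a fixed, predefined schema, so no column list is needed
CREATE TABLE [Documents] AS FILETABLE
 WITH (FILETABLE_DIRECTORY = 'Documents');
```

Once created, the FILETABLE's directory appears as a Windows file share, which is what makes drag-and-drop inserts possible.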

Tuesday: SQL Server Performance for Developers

If you are a .NET, Visual Basic, or PHP programmer accessing a SQL Server database, there are some things you should know to performance tune your queries. Learn how to improve query performance with indexes, how to properly use parameterized queries, how to use the query analyzer, and how to avoid common SQL Server performance pitfalls.
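One of the pitfalls the session covers, parameterized queries, can be sketched in T-SQL with sp_executesql (the Departments table and column names here are just an illustration):

```sql
-- Building SQL by concatenating user input prevents plan reuse
-- and opens the door to SQL injection:
--   'SELECT ... WHERE department = ''' + @input + ''''

-- Parameterized version: one cached plan, reused for every value,
-- and the input can never be executed as SQL
EXEC sp_executesql
 N'SELECT id, department FROM Departments WHERE department = @dept',
 N'@dept VARCHAR(200)',
 @dept = 'Camping';
```

Client libraries (ADO.NET, PDO, etc.) do the same thing under the hood when you bind parameters instead of concatenating strings.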

Looking at the lineup of other speakers this looks like it will be a great few days of education. It is nice to attend a conference so close to home, no flights involved for me this time.

Index Overview – Clustered and Nonclustered

This is intended as a brief overview of indexing on SQL Server. Understanding and using indexes can lead to some major performance improvements.

Non-Clustered Indexes

Non-clustered indexes are often considered traditional indexing: an index that contains pointers to the data. This is similar to the index at the back of a book. If you open a book to the index and look up a term, you are given the page numbers where that term appears. A non-clustered index works the same way: it contains a list of key values, and for each value a link to the actual location of the data, similar to the page number in the back of a book.
A non-clustered index contains only the data specified in the index, plus the clustered index key (often the primary key) needed to get to the rest of the data. Non-clustered indexes do not change the base layout of the table; they are a copy of part of the data, with pointers back to the rows.
Non-clustered indexes can be created on most data types, including char(), varchar(), and uniqueidentifier. A non-clustered index can also include multiple columns, which can improve performance for queries that filter or sort on those columns.
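As a minimal sketch, using a hypothetical Departments table with id, department, and parent columns (the index names are made up for illustration):

```sql
-- A non-clustered index on the parent column; the index stores
-- the parent values plus a locator back to each base row
CREATE NONCLUSTERED INDEX IX_Departments_parent
 ON Departments (parent);

-- A multi-column (composite) non-clustered index, useful for
-- queries that filter on parent and sort or filter on department
CREATE NONCLUSTERED INDEX IX_Departments_parent_department
 ON Departments (parent, department);
```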

Clustered Indexes

A clustered index reorganizes the actual data on disk, changing the entire base table structure. Since the clustered index rearranges the base table, only one clustered index is allowed per table.
The term clustering refers to the act of adding a clustered index. Used incorrectly, clustering can significantly increase the size of a table and the database.
Used correctly, a clustered index can dramatically improve performance.
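A sketch of creating one, again on a hypothetical Departments table (the index name is illustrative; in practice the clustered index is often created implicitly by a primary key constraint):

```sql
-- Only one clustered index is allowed per table, because it
-- defines the physical order of the rows themselves
CREATE CLUSTERED INDEX IX_Departments_id
 ON Departments (id);
```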

Covered Indexes

A covered index is not another type of index. The term usually applies to a non-clustered index, and the index is considered covering when it contains everything a query is looking for, so the query never needs to touch the base table.
Covering indexes can lead to major performance gains, since the base table doesn't need to be accessed at all.
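One common way to build a covering index is the INCLUDE clause, which adds non-key columns to the index leaf level. A sketch on the same hypothetical Departments table:

```sql
-- The key column (parent) plus the included column (department)
-- hold everything the query below asks for, so the base table
-- is never accessed
CREATE NONCLUSTERED INDEX IX_Departments_parent_covering
 ON Departments (parent)
 INCLUDE (department);

-- This query can be answered entirely from the index above
SELECT department FROM Departments WHERE parent = 1;
```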

Index Usage Terminology

Scan: An Index Scan accesses all the rows in the index.
Seek: An Index Seek uses selective rows in the index.

A seek is much quicker than a scan, since only the selected rows are touched.
An index scan is usually still much quicker than a full table scan, because the index is typically much smaller than the table.
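Assuming a non-clustered index exists on the parent column of a hypothetical Departments table, the difference shows up in the execution plan:

```sql
-- A selective predicate on the indexed column produces an Index Seek:
-- SQL Server navigates directly to the matching rows
SELECT department FROM Departments WHERE parent = 1;

-- A query that must read every row of the index produces an
-- Index Scan: every index entry is touched
SELECT COUNT(*) FROM Departments;
```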

Temp Table vs Table Variable vs CTE and the use of TEMPDB.

For more information on Common Table Expressions and performance, take a look at my book at Amazon.com: Common Table Expressions Joes 2 Pros®: A CTE Tutorial on Performance, Stored Procedures, Recursion, Nesting and the use of Multiple CTEs

There are many reasons that a Temp Table, Table Variable or Common Table Expression can be a good solution, but there is also a lot of confusion around these.  Much of that confusion is around the use of TempDB.

For the purpose of this article I am going to use my standard Departments table from my CTE presentation. It is just a simple database to work with. Also, this is running on my local SQL Server with no other users connected.


USE [master];
SET STATISTICS IO OFF;

IF EXISTS(SELECT name FROM sys.databases WHERE name = 'cte_demo')
BEGIN
 ALTER DATABASE [cte_demo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
 DROP DATABASE [cte_demo];
END
CREATE DATABASE [cte_demo];
GO

USE [cte_demo];
-- create a table to use for CTE query demo
CREATE TABLE [Departments] (
 id int, --would normally be an INT IDENTITY
 department VARCHAR (200),
 parent int
 );

-- insert top level departments
insert into [Departments] (id, department, parent) values (1, 'Camping', null);
insert into [Departments] (id, department, parent) values (2, 'Cycle', null);
insert into [Departments] (id, department, parent) values (3, 'Snowsports', null);
insert into [Departments] (id, department, parent) values (4, 'Fitness', null);

-- now some sub-departments for camping
insert into [Departments] (id, department, parent) values (5, 'Tents', 1);
insert into [Departments] (id, department, parent) values (6, 'Backpacks', 1);
insert into [Departments] (id, department, parent) values (7, 'Sleeping Bags', 1);
insert into [Departments] (id, department, parent) values (8, 'Cooking', 1);

-- now some sub-departments for cycle
insert into [Departments] (id, department, parent) values (9, 'Bikes', 2);
insert into [Departments] (id, department, parent) values (10, 'Helmets', 2);
insert into [Departments] (id, department, parent) values (11, 'Locks', 2);

-- now some sub-departments for snowsports
insert into [Departments] (id, department, parent) values (12, 'Ski', 3);
insert into [Departments] (id, department, parent) values (13, 'Snowboard', 3);
insert into [Departments] (id, department, parent) values (14, 'Snowshoe', 3);

-- now some sub-departments for fitness
insert into [Departments] (id, department, parent) values (15, 'Running', 4);
insert into [Departments] (id, department, parent) values (16, 'Swimming', 4);
insert into [Departments] (id, department, parent) values (17, 'Yoga', 4);

Then to compare the difference on the TempDB usage, I am going to use an under-documented function called fn_dblog() which shows you a list of the items that are written to the transaction log in different scenarios.  I will also combine this with the CHECKPOINT command to clear the contents of the log for TempDB.

To try it out, run the following on a TEST or DEVELOPMENT system. Don’t run this on your production system.


USE TempDB;
GO

SELECT *
 FROM fn_dblog(null, null);

CHECKPOINT;

SELECT *
 FROM fn_dblog(null, null);

Then take a look at the output, which will look something like this.

The first time you select from fn_dblog() you may see many rows. In my example fn_dblog() returned 171 rows; you may have many more. After running CHECKPOINT you get only 3 rows, and those 3 rows are the normal baseline you will see after a CHECKPOINT.
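With that baseline established, the same technique can be used to compare how much logging each object type generates. A sketch of the comparison (run on a test system only; the exact counts will vary, and both temp tables and table variables do use TempDB):

```sql
USE TempDB;
GO
CHECKPOINT;  -- start from the 3-row baseline

-- Create and fill a temp table, then count the log records generated
CREATE TABLE #t (id INT, department VARCHAR(200));
INSERT INTO #t (id, department) VALUES (1, 'Camping');
SELECT COUNT(*) AS log_rows_after_temp_table
 FROM fn_dblog(null, null);

CHECKPOINT;  -- reset the baseline again

-- Repeat the same work with a table variable
DECLARE @t TABLE (id INT, department VARCHAR(200));
INSERT INTO @t (id, department) VALUES (1, 'Camping');
SELECT COUNT(*) AS log_rows_after_table_variable
 FROM fn_dblog(null, null);
```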

Database Health Reports – Historic Waits Report Sneak Preview

Coming in the next beta of the Database Health Reports is a new feature called the Historic Reports, which installs a small database to track database statistics over time. This has an extremely light impact on overall database performance, and it will allow you to track down the details of why the database was slow at a specific point in time.

Here is the current rough version of the Historic waits report for one database.  With this report, you can filter down to specific time frames with different granularity options to find a specific point in time where the system was slow, and from there determine what query or queries were causing the system to be slow.

The new historic waits reports should completely replace the existing waits reports that were included in Beta 1 and Beta 2. What I love about the new historic waits reporting is that this is something I could use to make my job easier every day.

For now, please download Beta 2 of the Database Health Reports, or register on this site and you will be informed when Beta 3 of the Database Health Reports is available. Beta 2 was cool, but Beta 3 is going to be spectacular. Beta 3 is scheduled for about November 15th, so I have about three and a half weeks to get all the cool new historic waits monitoring features in for the beta. Beta 3 will support monitoring of SQL Server 2005, 2008, 2008 R2, and 2012.

-Steve Stedman

Bellingham WA

Now that Database Health Beta 2 is out, what’s next…

Now that Beta 2 of the Database Health Project is out, and people are using it, I am going to focus my attention on 2 areas.

  1. Fixing any bugs that are reported in Beta 2 of the Database Health Project.
  2. Adding new features, specifically related to offline monitoring or history data.

Currently, all of the reports in the Database Health Project report on what SQL Server can tell you right now. What happens when somebody reports that the server was slow at 4:00pm yesterday? How can you track that down? The plan, if all goes well with the next beta of the Database Health Project, is a way to configure a database on a monitoring system that will check up on your SQL Server periodically and record any problems. This will allow you to find out what query or what wait type caused the system to be slow at some point when you weren’t watching the database.

This is going to be very cool!

So far I have the core tracking and monitoring features working, now I just need to work on the reporting so that it is quick and easy to understand the data.

For now, please try out Beta 2 of the Database Health Project; it is a free download.

-Steve Stedman