Introducing the Joes2Pros Academy

The new Joes2Pros Academy has just launched. The Academy provides an interactive classroom experience that can be accessed on your own time, as you need it. The site has high-quality content, quizzes, and exams. Click play to watch the video below for more information on the Joes2Pros Academy.

Students can interact with other students and instructors to get the best experience out of the Joes2Pros Academy.

Courses are offered in the following areas:

  • SQL Server Administration
  • SQL Server Developer
  • SQL Server BI
  • SSRS
  • AlwaysOn
  • C#
  • and more

The Joes2Pros Academy helps you take the journey from average Joe to technology Pro.

As an instructor in the Joes2Pros Academy, I have been able to learn a great deal from the other instructors. I recommend taking a look and at least trying the free 3-day trial.

Shrinking a Log File on a Test or Development Server

On my computer I run SQL Server 2012, and I use a number of test and development databases. I don't run backups on these because I don't care if they get destroyed; they are easy to replace. Recently I upgraded to an SSD, so the C:\ drive that SQL Server runs from is now really fast, and SQL Server in the development environment is very fast as well. The only drawback is the size of the SSD: instead of a 2TB disk, I am now limited to 120GB of fast disk, with a second 2TB disk for everything else. I ended up with the 120GB drive because I was more focused on performance than on overall size.

Yes, this disk is really fast, but suddenly I need to be more aware of what is using up space because of the limited size. I use AdventureWorks for a lot of the test scripts and queries that I blog about, and I noticed that my AdventureWorks log file was 1.5GB, about 1% of my entire disk space, nearly all of it wasted on this particular server.

My first thought was to just move the databases over to my slower non-SSD drive, but I want things as fast as possible, so that is not an option.

WARNING: This advice is intended for a development or test database where it doesn't matter if the data file is lost. It could be done in production once it has been practiced in a test or development environment, but be sure you know what you are doing before running it on a production system.

First, let's take a look at the size of the AdventureWorks log file.

AdventureWorksLogSize

It's been a while since I set up this database, and I don't remember how it was originally configured for backups, so first let's set the recovery model to SIMPLE, meaning that we don't need transaction log backups. The recovery model can be set from the Database Properties dialog on the Options page.

Simple
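The same change can also be made with T-SQL instead of the dialog, using standard ALTER DATABASE syntax:

```sql
-- Set the recovery model to SIMPLE so the transaction log
-- no longer needs log backups in order to be truncated.
ALTER DATABASE AdventureWorks2012 SET RECOVERY SIMPLE;
GO
```

You can confirm the change by checking the recovery_model_desc column in sys.databases.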

Next we can check how much space is being used, and how much space is available, in the AdventureWorks2012_log file. To do that we just run the following query in the AdventureWorks database.

USE AdventureWorks2012;
GO
SELECT (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int))/128.0 AS AvailableMB,
 name
FROM sys.database_files;

Running the query produces the following output:
waste_space

The output shows that there is 1.5GB of available space in the log file that could be reclaimed. Now on to reclaiming it. To reclaim the space we are going to use DBCC SHRINKFILE on the log. When running this, be sure to run it only on the log file. If DBCC SHRINKFILE is run against the data file, it usually ends up fragmenting all of the indexes and decreasing performance while the space is reclaimed.


USE AdventureWorks2012;
GO
DBCC SHRINKFILE (AdventureWorks2012_Log);
GO

Producing the following output:

shrinkfile_results

After the shrink, the size is 63 data pages, with 56 used. To see the actual numbers in MB, just re-run the original query from above to see the following results:

nowast_space

From here we see that the available space in the log file has been cleared up, and if we look at the file sizes on disk, we see the following:

filesystem_after_shrink

Rather than 1.5GB, the file is now 504KB, or about half a megabyte.
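If you prefer to check from T-SQL rather than the file system, the physical file sizes are also visible in sys.database_files, which reports size in 8KB pages:

```sql
USE AdventureWorks2012;
GO
-- size is reported in 8KB pages, so divide by 128.0 to get MB
SELECT name, size/128.0 AS SizeMB
FROM sys.database_files;
```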

Overall, the plan was to free up space on my SSD, and that is exactly what shrinking the log file has done.

Beta 2 of the Database Health Reports Released

Beta 2 of the Database Health Reports has just been released.  The new Beta contains several bug fixes, and several new features.  Here is a breakdown of what is new or changed since the last beta.

New Features

Bug Fixes

  • Tuned the performance of the Page Reads and Page Writes reports, which were having some difficulty on larger databases.
  • Fixed a bug in the Plan Cache report, which was originally only showing data for the master database; it now shows data for the appropriate database.
  • Tuned the performance of the Table Size Advisor dialog. There is a lot going on in this dialog, so it may still take some time, but it is faster than before.
  • Tuned the performance of the Fragmented Indexes report.
  • Improved error handling on connection failures such as an invalid password or a bad username.
  • Removed the backup status for TempDB, as it really didn't make any sense. Why would anyone back up TempDB?

Download the free Beta 2 version and give it a try.

Finding Tables That Don't Have a Clustered Index

Although clustered indexes are not required, and there are many cases where you should not use one, clustered indexes are quite often overlooked when creating a table. This is even more likely if you inherit a database from someone who didn't know enough about indexes. In these cases, adding the right clustered index can dramatically increase query performance.

As part of the Database Health Reports, I have added the Unclustered Tables report, which lists the tables in your database that do not have a clustered index. From there you can analyze each of those tables and determine whether it should be clustered.
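If you want to find these tables with plain T-SQL rather than the report, heaps (tables without a clustered index) show up in sys.indexes with index_id = 0; a quick sketch:

```sql
-- Every table without a clustered index (a heap) has a row
-- in sys.indexes with index_id = 0.
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY s.name, t.name;
```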

Take a look and give it a try. This report will be in Beta 2, which will be available for download on October 1st, 2012.

TSQL to Determine Plan Cache Size

In SQL Server 2012, there was a change to the dynamic management view that reports on the plan cache, so the query has changed. Here are the queries to determine the plan cache size in SQL Server 2012 and in versions prior to SQL Server 2012.

SQL 2012 Plan Cache Size

-- How big is the plan cache in SQL 2012
SELECT name, SUM(pages_kb)/1024.0 AS MBUsed
FROM sys.dm_os_memory_clerks
WHERE name = 'SQL Plans'
GROUP BY name;

Pre-SQL 2012 Plan Cache Size

-- How big is the plan cache pre SQL 2012
SELECT name, SUM(single_pages_kb + multi_pages_kb)/1024.0 AS MBUsed
FROM sys.dm_os_memory_clerks
WHERE name = 'SQL Plans'
GROUP BY name;

For more details on the plan cache, one-time-use queries, and queries needing parameterization, see the Database Health project.
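As a quick example of spotting those one-time-use queries, sys.dm_exec_cached_plans tracks a use count and size for each cached plan; a sketch:

```sql
-- Ad hoc plans that have only ever been used once are often
-- candidates for parameterization (or the 'optimize for ad hoc
-- workloads' server option).
SELECT COUNT(*) AS SingleUsePlans,
       SUM(CAST(size_in_bytes AS bigint))/1024/1024 AS MBUsed
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
  AND usecounts = 1;
```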

Latest Backups with TSQL

After rebooting a SQL Server, for whatever reason, it is always good to confirm that the system is running well. There are many things that I check, one of which is confirming that backups are running.

You can always go to the server, browse the various directories holding backups, and check on the latest files, but this can be difficult for servers with many databases.

Here is my quick check to confirm which backups have been running on SQL Server. The query below gives you the 20 most recent backups.

SELECT TOP 20 database_name,
type,
backup_start_date,
backup_finish_date,
compressed_backup_size,
backup_size
FROM   msdb.dbo.backupset
ORDER  BY backup_set_id DESC;

To get the exact results you are looking for, you may need to add a WHERE clause to filter on specific databases, or to expand the number of rows returned in the TOP clause.
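For example, to see only the most recent full backups of a single database, you can filter on database_name and on the backup type ('D' is the full-database type in backupset):

```sql
-- Five most recent full backups for one database
SELECT TOP 5 database_name,
       backup_start_date,
       backup_finish_date,
       backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'AdventureWorks2012'  -- database to check
  AND type = 'D'                            -- D = full database backup
ORDER BY backup_finish_date DESC;
```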

I hope that you find this as useful as I do.

You can also check this with charts and additional details using the Database Health Reports project.