
Optimize for Ad Hoc Workloads

The Optimize for Ad Hoc Workloads server configuration option can improve performance and is extremely unlikely to negatively impact performance.
This feature was introduced in SQL Server 2008, and as with many new features in SQL Server, it is not turned on by default.

What Optimize for Ad Hoc Workloads Changes

Without the Optimize for Ad Hoc Workloads feature enabled, when a plan is compiled, it is kept around in memory (cached) until something else pushes it out of the cache.

With the Optimize for Ad Hoc Workloads feature enabled, the first time a plan is compiled it is not kept in cache; instead, a small stub is kept to indicate that the query has previously been compiled. The second time the plan is compiled, it is kept in memory. Over time you end up with fewer single-use queries in memory, reducing the memory impact of the plan cache. The trade-off is that multi-use ad hoc plans need to be compiled twice, on the first and second times that they are used.
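
To see what this looks like in the plan cache, the query below is a small sketch of my own (not from the original post); it counts ad hoc entries by cache object type, where the 'Compiled Plan Stub' rows are the stubs described above.

-- Count ad hoc plans and plan stubs currently in the plan cache
SELECT cacheobjtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 AS size_kb
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
GROUP BY cacheobjtype;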

What it doesn’t do

The Optimize for Ad Hoc Workloads server configuration doesn’t change the size of the plan cache in SQL Server. It does, however, prevent single-use ad hoc plans from using up plan cache space, leaving more room for other cached plans.
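
For reference, the setting is turned on at the server level with sp_configure; a minimal sketch follows (it is an advanced option, so 'show advanced options' must be enabled first).

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;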

Read More »Optimize for Ad Hoc Workloads

What is a Page Split

Tables and indexes in SQL Server are organized into 8 KB chunks called pages. If you have rows that are 100 bytes each, you can fit about 80 of those rows into a given page. If you update one of those rows so that a variable-length field such as VARCHAR or NVARCHAR grows, the page can overflow, forcing a page split. The page split moves about half of the data into a newly allocated page, leaving about half in the original page. A page split can also occur on insert: if a row, based on the index key, belongs on a page that is nearly full and the row doesn’t fit, the page splits.

If there is room and your update or insert doesn’t require a split, the operation is quick, since SQL Server just updates one page and writes it to disk and to the transaction log. But if the updated or inserted row doesn’t fit, SQL Server needs to allocate a new page, move about half the rows, and then write both pages to disk and to the transaction log. Additionally, the pages in all the indexes that point to the data pages need to be updated. Let’s say your table had 1 clustered index and 4 nonclustered indexes: at a minimum 7 pages would be updated, 1 for the clustered index structure, 4 for the nonclustered indexes, and 2 data pages in the clustered index. In this specific example the page split would cause at least 7 times the I/O of an insert or update that didn’t require a page split.
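
If you want a rough idea of how often splits happen on an instance, one option (a sketch of mine, not part of the original post) is to read the cumulative Access Methods counter exposed through sys.dm_os_performance_counters and compare two samples taken some time apart.

-- Despite the name, cntr_value here is a running total since the instance started
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name = 'Page Splits/sec';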

Read More »What is a Page Split

Database Corruption Challenge Week 7 – Alternate Solution

The alternate solution to the Database Corruption Challenge this week was created by Patrick Flynn. It is the only solution that successfully recovers all of the data without using any of the backups. If the challenge had been structured differently and all you had were the database files and no backups, this solution would still work just fine.

As you read through this solution, you will notice the use of Common Table Expressions throughout. Patrick is the first person I have seen write T-SQL code that uses Common Table Expressions more than I do. (I mean that as a compliment, Patrick.)

To start with, let’s add the UDF_Convert_Hex_to_Binary function to the master database, to match what Patrick had on his system.

All code provided by Patrick, comments and screenshots by me.

-- Step 0 - Create Supporting function
USE [master]
GO
-- Converts each byte of a VARBINARY value into its 8-character bit string,
-- returning the concatenated result (for example, 0xA5 becomes '10100101').
CREATE FUNCTION [dbo].[UDF_Convert_Hex_to_Binary]
(
      @HEX VARBINARY(MAX)
)
RETURNS VARCHAR(MAX)
BEGIN

	DECLARE @BINARY VARCHAR(MAX)

	-- Stacked CTEs build a small numbers table: N1 = 2 rows, N2 = 4, N3 = 16,
	-- and N4 = 256 sequential values used as byte positions into @HEX.
	;WITH N1 (n) AS (SELECT 1 UNION ALL SELECT 1),
	N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y),
	N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y),
	N4 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n)
	FROM N3 AS X, N3 AS Y)

	-- For each byte, integer-divide by decreasing powers of two and take the
	-- remainder modulo 2 to extract the individual bits, most significant first.
	SELECT @BINARY=ISNULL(@BINARY,'')
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 128) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 64) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 32) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 16) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 8) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 4) % 2)
	+ CONVERT(NVARCHAR(1), (SUBSTRING(@HEX, Nums.n, 1) / 2) % 2)
	+ CONVERT(NVARCHAR(1), SUBSTRING(@HEX, Nums.n, 1) % 2)

	FROM N4 Nums
	WHERE Nums.n<=LEN(@HEX)

	RETURN @BINARY
END
GO
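
As a quick sanity check of the function (my example, not Patrick’s), 0xA5 is 10100101 in binary:

SELECT [master].[dbo].[UDF_Convert_Hex_to_Binary](0xA5);  -- returns '10100101'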

Once that function is created, let’s drop the previous CorruptionChallenge7 database, because this is probably not our first time running through the solution. If you have a different way of checking for a database and dropping it, please see my blog post on that topic from earlier today.
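
A minimal sketch of that cleanup step (my version, not necessarily the exact code from the full post) looks something like this:

IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'CorruptionChallenge7')
BEGIN
    -- kick out any remaining connections, then drop the old copy
    ALTER DATABASE [CorruptionChallenge7] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [CorruptionChallenge7];
END
GO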

Read More »Database Corruption Challenge Week 7 – Alternate Solution

Week 5 – Alternate Solution

Here is how I solved Week 5 of the Database Corruption Challenge. The following steps were tested and confirmed working on SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014.

To oversimplify, here are the steps:

  • Restore the last known good database.
  • Shut down the database, and copy off the last good database files.
  • Replace some files and restart it. (Hack Attach)
  • Realize that the boot page, page 1:9, is corrupt (see the DBCC PAGE sketch after this list).
  • Shut down the database.
  • Copy the boot page from the last good database files and place it in the corrupt file.
  • Restart the database.
  • Realize there is other corruption.
  • Fix the other corruption.
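
For anyone following along, this is roughly how to look at the boot page and confirm it is the damaged one. The sketch is mine rather than part of the original walkthrough, and the database name is an assumption based on the naming used in the other challenges.

DBCC TRACEON(3604);                           -- send DBCC PAGE output to the client
DBCC PAGE('CorruptionChallenge5', 1, 9, 3);   -- file 1, page 9 (the boot page), full detail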

Read More »Week 5 – Alternate Solution

Database Corruption Challenge Week 4 Results

It is my pleasure to announce the winners of the Week 4 Database Corruption Challenge. But first, how about a rundown of the challenge.

In order to make up for the difficult time at which Challenge #3 was issued, Challenge #4 was issued at 2:00 pm Pacific time on a Friday afternoon. Those who had signed up for my newsletter received early notification of when the corruption challenge would occur, but not details of what the challenge would be. When I launched it, I ran into a couple of technical difficulties: it appears that the WordPress site that hosts this blog was having trouble with attachments larger than 8 MB. I ended up having to upload the file to the site where Database Health Monitor is hosted, where the larger file was accepted. This delayed the start of the challenge by about 20 minutes.

After the challenge was released, it was quiet, almost too quiet. Then about 40 minutes in, Randolph West messaged me on Twitter to say that he was on it, and that he had discovered the CDC (Change Data Capture) tables. He also mentioned that he had dinner plans. He kept working on it, and at 4:55 pm Pacific time he provided me with proof that he had completed the challenge; however, it wasn’t until 5:06 pm Pacific time that he provided the full script showing his solution. At that point Randolph was the winner of the Week 4 Database Corruption Challenge, two hours and 46 minutes after the challenge became available for download.

Read More »Database Corruption Challenge Week 4 Results

Week 4 – Building the Corrupt Database

The following are the steps that I went through to create a more difficult Database Corruption Challenge for Week 4.
Here is how I built the database. Nothing fancy; however, I did add an extra FILEGROUP called [UserObjects].

CREATE DATABASE [CorruptionChallenge4]
 ON  PRIMARY
( NAME = N'CorruptionChallenge4',               -- primary data file
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4.mdf',
  SIZE = 4288KB ,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 1024KB ),
FILEGROUP [UserObjects]  DEFAULT                -- extra filegroup, set as the default
( NAME = N'UserObjects',
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4_UserObjects.ndf' ,
  SIZE = 4096KB,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 1024KB )
LOG ON
( NAME = N'CorruptionChallenge4_log',
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4_log.ldf',
  SIZE = 1088KB,
  MAXSIZE = 2048GB,
  FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CS_AS;           -- case-sensitive, accent-sensitive collation
GO

Notice that, to make things slightly more challenging, the database uses a case-sensitive, accent-sensitive collation. I hope this didn’t cause too much trouble.
Next I created a couple of tables.
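
The actual table definitions are in the full post; purely as an illustration (the table and column names here are my own), any table created without an ON clause lands in the [UserObjects] filegroup, since that filegroup was marked DEFAULT.

USE [CorruptionChallenge4];
GO
CREATE TABLE dbo.Customers
(
    CustomerId   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL
);  -- stored in [UserObjects], the default filegroup
GO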

Read More »Week 4 – Building the Corrupt Database

Corruption Challenge 1 – An alternative solution

After posting the winning solution for Corruption Challenge 1 from Brent Ozar, I realized that he and I had both solved the corruption by using the REPAIR_ALLOW_DATA_LOSS option on CheckDB. A very nasty move, but it did repair the corruption.


-- Database name omitted in the original; supply it, and note that
-- REPAIR_ALLOW_DATA_LOSS requires the database to be in SINGLE_USER mode.
DBCC CHECKDB ('',REPAIR_ALLOW_DATA_LOSS);

After reading some feedback, one of the winners stated:

As soon as he ran REPAIR_ALLOW_DATA_LOSS, I knew we weren’t on the same page. I just never do that unless I’ve exhausted all the other options.

Which is a good point. In this solution I was fairly certain what REPAIR_ALLOW_DATA_LOSS was going to do; however, in a real-world scenario, who knows what might be affected beyond the initial table that we know about.

There are several other options for cleaning up the corrupt table besides REPAIR_ALLOW_DATA_LOSS. These options still involve copying the data off to another table and finding the missing data from row 31; however, how the corruption gets cleaned up varies widely with the following options:
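
Before getting to those options (they follow in the full post), the copy-off step they share can be sketched roughly as follows; the table and column names are hypothetical, and depending on the damage you may need to read around the broken page in key ranges rather than in a single pass.

-- Copy the rows that are still readable into a holding table
SELECT *
INTO dbo.Customers_Recovered
FROM dbo.Customers
WHERE CustomerId <> 31;   -- row 31 is the one whose data has to be recovered separately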

Read More »Corruption Challenge 1 – An alternative solution