Week 2 of the Database Corruption Challenge

Welcome to week 2 of the Database Corruption Challenge.

Hi, I am Steve Stedman, and I created this challenge to help the best and brightest SQL Server DBAs on the planet test their skills and show their expertise to the world.

The winner last week was Brent Ozar; I would just like to congratulate him again on winning the first Database Corruption Challenge (DBCC). Nice job, Brent.

On to the week 2 challenge:

Here it is.

[Image: Challenge2_Corruption1]

Click here to find out about the week 2 database corruption challenge.

 

Enjoy the challenge. I look forward to seeing the solutions.

-Steve Stedman


Next Corruption Challenge – To Be Announced Soon

The next Corruption Challenge will be announced soon. The Corruption Challenge from last week was a hit, and I am going to run another one this weekend.


If you want to find out about the next corruption challenge, stick around, or register for my newsletter to be notified when the challenge begins.

Related Links:

Difference Between TRUNCATE TABLE and DELETE FROM Table

Most DBAs know the usual differences between TRUNCATE TABLE and DELETE FROM; however, one not-so-obvious difference between the two is how each behaves when the table is corrupt.

For instance, if you look at the corrupt database from Database Corruption Challenge Week 1, you see the following:

[Image: DeleteVsTruncate1]

If you want to clear out the corrupt Revenue table, one way to attempt it is with a DELETE statement like this:
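A minimal sketch of such an attempt, using the Revenue table from Week 1 (the error number is illustrative; exact behavior depends on where the corruption sits):

-- DELETE reads every row it removes, so on a corrupt table it will
-- typically fail with a consistency error (such as error 824) as soon
-- as it touches the damaged page.
DELETE FROM Revenue;

-- TRUNCATE TABLE, by contrast, deallocates whole pages without reading
-- the rows on them, so it can often succeed even when the table is corrupt.
TRUNCATE TABLE Revenue;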


Corruption Challenge 1 – An alternative solution

After posting Brent Ozar's winning solution for Corruption Challenge 1, I realized that he and I both fixed the corruption by using the REPAIR_ALLOW_DATA_LOSS option with DBCC CHECKDB. A very nasty move; however, it did repair the corruption.


DBCC CHECKDB ('CorruptionChallenge1', REPAIR_ALLOW_DATA_LOSS);
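Worth noting: REPAIR_ALLOW_DATA_LOSS only runs when the database is in single-user mode, so in practice the repair gets wrapped like this (a sketch using the challenge database):

-- Take the database to single-user mode, run the repair, then restore access.
ALTER DATABASE [CorruptionChallenge1] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('CorruptionChallenge1', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [CorruptionChallenge1] SET MULTI_USER;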

 

Reading through the feedback, I came across a comment from Rob Farley, one of the winners. He stated the following:

As soon as he ran REPAIR_ALLOW_DATA_LOSS, I knew we weren’t on the same page. I just never do that unless I’ve exhausted all the other options.

That is a good point. In this solution I was fairly certain what REPAIR_ALLOW_DATA_LOSS was going to do; in a real-world scenario, however, who knows what might be affected beyond the initial table that we know about.

There are several other options for cleaning up the corrupt table besides REPAIR_ALLOW_DATA_LOSS. These options still involve copying the data off to another table and recovering the missing data from row 31; how the corruption itself gets cleaned up, however, varies widely from one option to the next.
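As a sketch of that common first step (not the exact statements from any particular solution; Revenue_Copy is a hypothetical name), the surviving rows can be copied out by seeking around the damaged row on the clustered key:

-- Copy the readable rows into a holding table, skipping the damaged row.
-- Depending on where the page boundaries fall, the ranges may need to be
-- narrowed further to avoid touching the corrupt page. The CAST strips the
-- IDENTITY property so the later INSERT into the copy works without fuss.
SELECT CAST(id AS INTEGER) AS id, [DepartmentID], [Revenue], [Year], [Notes]
INTO Revenue_Copy
FROM Revenue
WHERE id < 31;

INSERT INTO Revenue_Copy
SELECT id, [DepartmentID], [Revenue], [Year], [Notes]
FROM Revenue
WHERE id > 31;

-- Row 31's values can then be reassembled from the nonclustered indexes
-- (ncDeptIdYear and ncBadNameForAnIndex), which between them carry copies
-- of every column in the table.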


Another Corruption Challenge This Weekend

As the weekend approaches and we head for another Database Corruption Challenge (DBCC), I want to remind everyone who is interested in being notified of the next event to sign up for my newsletter.

Just click here to find out more about the newsletter.


Sometime Friday I will announce the latest Corruption Challenge; those on the mailing list will be the first to know when it is available.

Related Links:

Corruption Challenge 1 – how I corrupted the database

Since the corruption challenge completed yesterday, I have had several requests asking how I created the corrupt database. So here is the script that I used to create Database Corruption Challenge 1.

First, the initial setup. Most of this I stole from a query training session that I did several weeks ago. All I really needed was a table with some data in it.


CREATE DATABASE [CorruptionChallenge1];
GO

USE [CorruptionChallenge1];

CREATE TABLE Revenue
(
[id] INTEGER IDENTITY,
[DepartmentID] INTEGER,
[Revenue] INTEGER,
[Year] INTEGER,
[Notes] VARCHAR(300)
);

INSERT INTO Revenue ([DepartmentID], [Revenue], [Year])
VALUES (1,10030,1998),(2,20000,1998),(3,40000,1998),
 (1,20000,1999),(2,600400,1999),(3,500400,1999),
 (1,40050,2000),(2,400300,2000),(3,604000,2000),
 (1,30000,2001),(2,30000,2001),(3,703000,2001),
 (1,90000,2002),(2,200200,2002),(3,80000,2002),
 (1,10300,2003),(2,1000,2003), (3,900300,2003),
 (1,10000,2004),(2,10000,2004),(3,100300,2004),
 (1,208000,2005),(2,200200,2005),(3,203000,2005),
 (1,40000,2006),(2,30000,2006),(3,300300,2006),
 (1,709000,2007),(2,40000,2007),(3,400300,2007),
 (1,50000,2008),(2,50000,2008),(3,500300,2008),
 (1,20000,2009),(2,600030,2009),(3,600300,2009),
 (1,300700,2010),(2,70000,2010),(3,700300,2010),
 (1,80000,2011),(2,80000,2011),(3,800200,2011),
 (1,100030,2012),(2,90000,2012),(3,900300,2012),
 (1,10000,2013),(2,90000,2013),(3,900100,2013),
 (1,100400,2014),(2,900300,2014),(3,903000,2014),
 (1,102000,2015),(2,902000,2015),(3,902000,2015);

UPDATE Revenue
SET [Notes] = CAST(NEWID() AS VARCHAR(300)) + 'This is some varchar data just to fill out some pages... data pages are only 8k, therefore the more we fill up each page, the more pages this table will flow into, thus simulating a larger table for the corruption example';
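
-- Optional check (an illustration, not part of the original script): the
-- undocumented DBCC IND command lists the pages a table occupies, which
-- confirms the Notes padding has spread the rows across many data pages.
DBCC IND ('CorruptionChallenge1', 'Revenue', -1);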

CREATE CLUSTERED INDEX [clustId] ON [dbo].[Revenue]
(
 [id] ASC
);

CREATE NONCLUSTERED INDEX [ncDeptIdYear] ON [dbo].[Revenue]
(
 [DepartmentID] ASC,
 [Revenue] ASC
);

CREATE NONCLUSTERED INDEX [ncBadNameForAnIndex] ON [dbo].[Revenue]
(
 [Year] ASC
)
INCLUDE ( [Notes]) ;

-- First, let's look at the Revenue table
SELECT *
 FROM Revenue;

[Image: Setup1]
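From here the post goes on to the actual corruption step. One common way to deliberately damage a page in a throwaway test database (not necessarily the method used for this challenge) is the undocumented DBCC WRITEPAGE command:

-- WARNING: DBCC WRITEPAGE is undocumented and unsupported; only ever run it
-- against a disposable test database. File id 1 and page id 288 below are
-- hypothetical; DBCC IND can be used to find real page ids for the table.
ALTER DATABASE [CorruptionChallenge1] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Arguments: database, file id, page id, byte offset, number of bytes, the
-- new bytes, and a final 1 to write directly to disk, bypassing the buffer
-- pool so the page checksum is left inconsistent.
DBCC WRITEPAGE ('CorruptionChallenge1', 1, 288, 100, 2, 0x0000, 1);

ALTER DATABASE [CorruptionChallenge1] SET MULTI_USER;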
