Duplicate Indexes, What a Waste

Today, during the tuning minute on the SQL Data Partners Podcast, we discussed duplicate indexes, which led me to think more about them and write this post.

You know there are many different ways of doing things in SQL Server, and oftentimes you can argue that one way or the other is better; given the right situation, anything might be a good idea. Duplicate indexes, however, are a different story.

When I talk about duplicate indexes, what I mean is two or more indexes on the same table covering exactly the same columns. Something like this:


CREATE NONCLUSTERED INDEX [IX_LastName] ON [dbo].[Customer]
(
 [Lastname] ASC
);

CREATE NONCLUSTERED INDEX [dta123123123_LastName] ON [dbo].[Customer]
(
 [Lastname] ASC
);

Two indexes on exactly the same column. There is nothing to be gained here.
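
To hunt these down across a whole database, here is a minimal sketch (assuming SQL Server 2017 or later, since it uses STRING_AGG): group the nonclustered indexes by their ordered key-column lists and flag any list that appears more than once on the same table.

-- Minimal sketch: find nonclustered indexes whose key columns are identical.
-- Note: included columns are ignored here, so review matches before dropping.
SELECT t.name                    AS TableName,
       kc.KeyColumns,
       COUNT(*)                  AS IndexCount,
       STRING_AGG(i.name, ', ')  AS IndexNames
  FROM sys.tables AS t
  JOIN sys.indexes AS i
    ON i.object_id = t.object_id
 CROSS APPLY ( SELECT STRING_AGG(c.name, ',')
                        WITHIN GROUP (ORDER BY ic.key_ordinal) AS KeyColumns
                 FROM sys.index_columns AS ic
                 JOIN sys.columns AS c
                   ON c.object_id = ic.object_id
                  AND c.column_id = ic.column_id
                WHERE ic.object_id = i.object_id
                  AND ic.index_id  = i.index_id
                  AND ic.key_ordinal > 0        -- key columns only, in key order
             ) AS kc
 WHERE i.type_desc = 'NONCLUSTERED'
 GROUP BY t.name, kc.KeyColumns
HAVING COUNT(*) > 1;

Every duplicate you drop saves the cost of maintaining the same index twice on every insert, update, and delete.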

Read More » Duplicate Indexes, What a Waste

Database Corruption Challenge #9 Complete

Database Corruption Challenge #9 has completed, with 19 winning participants; you can take a look at the scoring page for exact details. The first to solve the challenge this week was Eduardo Rezende, who provided the winning solution just 58 minutes after the challenge began. The second winning solution arrived less than a minute after the first, so it was a very close win for Eduardo. If you know Eduardo (or even if you don’t), take a moment to congratulate him on his win.

There was no additional clue provided this time; nobody asked for one, and everyone who submitted a solution had the correct answer.

The corruption consisted of several corrupt pages in a nonclustered index, plus corruption in the clustered indexes of two different tables. What was interesting about the two clustered indexes was that, even though DBCC CheckDB or DBCC CheckTable reported errors, it was still possible to run SELECT * FROM the table and get all the results back. That meant the corruption could be fixed by selecting everything from the table, deleting the corrupt pages, and then putting the missing data back into the table. The challenge was that one of the tables was referenced by many other tables through foreign key constraints. Some solved it by dropping the foreign keys, truncating the table, reinserting the data, and recreating the foreign keys. Others solved it by running DBCC CheckTable with the REPAIR_ALLOW_DATA_LOSS option and then putting back just the rows that were missing; a rough sketch of that second approach follows.
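
Roughly, that second approach looks like the sketch below. Everything here is hedged: the table and column names are hypothetical placeholders rather than the actual challenge objects, the database name is assumed to follow the CorruptionChallengeN naming pattern, and REPAIR_ALLOW_DATA_LOSS requires the database to be in single-user mode.

-- 1. Copy everything that is still readable into a holding table.
SELECT *
  INTO dbo.Orders_Holding
  FROM dbo.Orders;

-- 2. Run the repair. REPAIR_ALLOW_DATA_LOSS deallocates the corrupt pages,
--    losing whatever rows were on them, and needs single-user mode.
ALTER DATABASE [CorruptionChallenge9] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKTABLE ('dbo.Orders', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [CorruptionChallenge9] SET MULTI_USER;

-- 3. Put back only the rows the repair threw away (this assumes OrderId is
--    the key and not an IDENTITY column; otherwise use SET IDENTITY_INSERT).
INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate)
SELECT h.OrderId, h.CustomerId, h.OrderDate
  FROM dbo.Orders_Holding AS h
 WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders AS o WHERE o.OrderId = h.OrderId);

The order matters: the copy has to happen before the repair, because the repair throws away the corrupt pages along with any rows still sitting on them.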

Read More » Database Corruption Challenge #9 Complete

Week 4 – Building the Corrupt Database

The following are the steps that I went through to create a more difficult Database Corruption Challenge for Week 4.
Here is how I built the database. Nothing fancy, but I did add an extra FILEGROUP called [UserObjects].

CREATE DATABASE [CorruptionChallenge4]
 ON  PRIMARY
( NAME = N'CorruptionChallenge4',
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4.mdf',
  SIZE = 4288KB ,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 1024KB ),
FILEGROUP [UserObjects]  DEFAULT
( NAME = N'UserObjects',
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4_UserObjects.ndf' ,
  SIZE = 4096KB,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 1024KB )
LOG ON
( NAME = N'CorruptionChallenge4_log',
  FILENAME = N'C:\SQL_DATA\CorruptionChallenge4_log.ldf',
  SIZE = 1088KB,
  MAXSIZE = 2048GB,
  FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CS_AS;
GO

Notice that, to make things slightly more challenging, the database uses a case-sensitive, accent-sensitive collation. I hope this didn’t cause too much trouble.
Next I created a couple of tables.
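
Because [UserObjects] was marked DEFAULT, anything created without an explicit ON clause lands in that filegroup rather than in PRIMARY. The snippet below is a hypothetical illustration (dbo.Demo is not one of the actual challenge tables) of that behavior, of the kind of surprise the case-sensitive collation can spring, and of how to confirm which filegroup a table ended up in.

USE [CorruptionChallenge4];
GO

-- No ON clause, so the table and its clustered index go to [UserObjects].
CREATE TABLE dbo.Demo
(
 [id] INT IDENTITY PRIMARY KEY,
 [LastName] VARCHAR(50)
);

INSERT INTO dbo.Demo ([LastName]) VALUES ('Smith');

-- With a case-sensitive collation, string comparisons must match case exactly.
SELECT * FROM dbo.Demo WHERE [LastName] = 'Smith';  -- returns the row
SELECT * FROM dbo.Demo WHERE [LastName] = 'SMITH';  -- returns nothing

-- Confirm which filegroup the table landed in.
SELECT OBJECT_NAME(i.object_id) AS TableName,
       fg.name                  AS FileGroupName
  FROM sys.indexes AS i
  JOIN sys.filegroups AS fg
    ON fg.data_space_id = i.data_space_id
 WHERE i.object_id = OBJECT_ID('dbo.Demo');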

Read More » Week 4 – Building the Corrupt Database

Corruption Challenge 1 – An alternative solution

After posting Brent Ozar’s winning solution for Corruption Challenge 1, I realized that he and I had both solved the corruption by using the REPAIR_ALLOW_DATA_LOSS option with CheckDB. A very nasty move, but it did repair the corruption.


DBCC CHECKDB ('CorruptionChallenge1', REPAIR_ALLOW_DATA_LOSS);

After reading some feedback, one of the winners stated:

As soon as he ran REPAIR_ALLOW_DATA_LOSS, I knew we weren’t on the same page. I just never do that unless I’ve exhausted all the other options.

That is a good point. In this solution I was fairly certain what REPAIR_ALLOW_DATA_LOSS was going to do; however, in a real-world scenario, who knows what might be affected beyond the initial table that we know about.
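
A minimal sketch of scoping the damage before repairing anything, assuming the database in question is CorruptionChallenge1:

-- See every error CHECKDB can find, without the informational noise.
DBCC CHECKDB ('CorruptionChallenge1') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Pages SQL Server has already flagged as suspect (checksum failures, torn
-- pages, 823/824 errors) are tracked here and help show how far it spreads.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
  FROM msdb.dbo.suspect_pages;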

There are several other options for cleaning up the corrupt table besides REPAIR_ALLOW_DATA_LOSS. These options still involve copying the data off to another table and finding the missing data from row 31; what varies widely between them is how the corruption itself gets cleaned up. The common copy-off step looks roughly like the sketch below.
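
This is only a sketch under assumptions: dbo.Revenue_Holding is a hypothetical name, and it assumes the undamaged rows of Revenue can still be read directly.

-- Copy whatever is still readable out of the corrupt table.
SELECT *
  INTO dbo.Revenue_Holding
  FROM dbo.Revenue;

-- Any gap in [id] points at the missing row (row 31 in this challenge).
SELECT r.id + 1 AS FirstMissingId
  FROM dbo.Revenue_Holding AS r
 WHERE NOT EXISTS (SELECT 1 FROM dbo.Revenue_Holding AS r2 WHERE r2.id = r.id + 1)
   AND r.id < (SELECT MAX(id) FROM dbo.Revenue_Holding);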

Read More » Corruption Challenge 1 – An alternative solution

Corruption Challenge 1 – how I corrupted the database

Since the corruption challenge completed yesterday, I have had several requests asking how I created the corrupt database. So here is the script that I used to create Database Corruption Challenge 1.

First the initial setup. Most of this I stole from a query training session that I did several weeks ago. All I really needed was a table with some data in it.


CREATE DATABASE [CorruptionChallenge1];
GO

USE [CorruptionChallenge1];

CREATE TABLE Revenue
(
[id] INTEGER IDENTITY,
[DepartmentID] INTEGER,
[Revenue] INTEGER,
[Year] INTEGER,
[Notes] VARCHAR(300)
);

INSERT INTO Revenue ([DepartmentID], [Revenue], [Year])
VALUES (1,10030,1998),(2,20000,1998),(3,40000,1998),
 (1,20000,1999),(2,600400,1999),(3,500400,1999),
 (1,40050,2000),(2,400300,2000),(3,604000,2000),
 (1,30000,2001),(2,30000,2001),(3,703000,2001),
 (1,90000,2002),(2,200200,2002),(3,80000,2002),
 (1,10300,2003),(2,1000,2003), (3,900300,2003),
 (1,10000,2004),(2,10000,2004),(3,100300,2004),
 (1,208000,2005),(2,200200,2005),(3,203000,2005),
 (1,40000,2006),(2,30000,2006),(3,300300,2006),
 (1,709000,2007),(2,40000,2007),(3,400300,2007),
 (1,50000,2008),(2,50000,2008),(3,500300,2008),
 (1,20000,2009),(2,600030,2009),(3,600300,2009),
 (1,300700,2010),(2,70000,2010),(3,700300,2010),
 (1,80000,2011),(2,80000,2011),(3,800200,2011),
 (1,100030,2012),(2,90000,2012),(3,900300,2012),
 (1,10000,2013),(2,90000,2013),(3,900100,2013),
 (1,100400,2014),(2,900300,2014),(3,903000,2014),
 (1,102000,2015),(2,902000,2015),(3,902000,2015);

UPDATE Revenue SET [Notes] = CAST(NEWID() as VARCHAR(300)) + 'This is some varchar data just to fill out some pages... data pages are only 8k, therefore the more we fill up in each page, the more pages this table will flow into, thus simulating a larger table for the corruption example';

CREATE CLUSTERED INDEX [clustId] ON [dbo].[Revenue]
(
 [id] ASC
);

CREATE NONCLUSTERED INDEX [ncDeptIdYear] ON [dbo].[Revenue]
(
 [DepartmentID] ASC,
 [Revenue] ASC
);

CREATE NONCLUSTERED INDEX [ncBadNameForAnIndex] ON [dbo].[Revenue]
(
 [Year] ASC
)
INCLUDE ( [Notes]) ;

-- First let's look at the Revenue table
SELECT *
 FROM Revenue;

(Screenshot: the output of SELECT * FROM Revenue.)
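
Since the whole point of the NEWID() filler in [Notes] is to push the table across more 8k data pages, here is a minimal sketch for checking how many leaf-level pages each index on Revenue now spans:

-- Leaf-level page counts per index, showing how far the filler text spread
-- the table across 8 KB pages.
SELECT OBJECT_NAME(ps.object_id) AS TableName,
       i.name                    AS IndexName,
       ps.index_type_desc,
       ps.page_count
  FROM sys.dm_db_index_physical_stats(DB_ID('CorruptionChallenge1'),
                                      OBJECT_ID('dbo.Revenue'),
                                      NULL, NULL, 'DETAILED') AS ps
  JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
 WHERE ps.index_level = 0;   -- leaf level only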

Read More » Corruption Challenge 1 – how I corrupted the database