SQL Server SEQUENCE
In 2012, SQL Server introduced the SEQUENCE object. Looking at the syntax, it is very similar to how Oracle has implemented sequences for many years.
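As a minimal sketch of that syntax (the object names here are placeholders, not taken from the full post):

-- Create a sequence object (SQL Server 2012 and later)
CREATE SEQUENCE dbo.OrderNumbers
    AS BIGINT
    START WITH 1
    INCREMENT BY 1;

-- Pull the next value directly, or use it in an INSERT or as a column DEFAULT
SELECT NEXT VALUE FOR dbo.OrderNumbers;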
Should I Use A Unique Identifier As A Primary Key?
Take the following sample code. Four similar tables with an INT IDENTITY, BIGINT IDENTITY, and two with UNIQUEIDENTIFIERs, one using newid() and the other…
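The sample code itself is not reproduced in this excerpt; purely for illustration, and with made-up table names, the kind of comparison being described looks like this:

-- An INT IDENTITY key versus a UNIQUEIDENTIFIER key generated with NEWID()
CREATE TABLE dbo.TestIntIdentity
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    SomeData VARCHAR(100)
);

CREATE TABLE dbo.TestGuidNewId
(
    ID UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY,
    SomeData VARCHAR(100)
);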
Are you wasting half the capacity of IDENTITY?
When creating a table you can specify IDENTITY with the following syntax: For example The IDENTITY property is valid for data types of the integer…
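The syntax and example referenced above do not survive in this excerpt; a minimal sketch of the IDENTITY(seed, increment) property, with placeholder names, is:

-- IDENTITY(seed, increment): seed is the first value generated, increment is the step
CREATE TABLE dbo.Orders
(
    OrderID INT IDENTITY(1, 1) PRIMARY KEY,
    OrderDate DATETIME NOT NULL
);
-- Starting the seed at the type's minimum value, for example IDENTITY(-2147483648, 1),
-- makes the negative half of the INT range available as well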
This confused me for a few minutes today, and I wanted to share it to help others avoid the same confusion.
The specific code was this:
DELETE t2 FROM [dbo].[Table1] t1 INNER JOIN [dbo].[Table2] t2 on t1.favColor = t2.id;
Names have been changed to protect the innocent.
In the above delete statement which table will have rows deleted from it?
A: Table1
B: Table2
C: Both Table1 and Table2
D: Neither Table1 nor Table2
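If you want to test your answer before reading on, a throwaway repro along these lines (the table definitions and data are made up for illustration) will show which table loses rows:

CREATE TABLE dbo.Table1 (favColor INT);
CREATE TABLE dbo.Table2 (id INT);

INSERT INTO dbo.Table1 (favColor) VALUES (1), (2);
INSERT INTO dbo.Table2 (id) VALUES (1), (2), (3);

DELETE t2
FROM [dbo].[Table1] t1
INNER JOIN [dbo].[Table2] t2 ON t1.favColor = t2.id;

-- Compare the row counts to see which table was affected
SELECT (SELECT COUNT(*) FROM dbo.Table1) AS Table1Rows,
       (SELECT COUNT(*) FROM dbo.Table2) AS Table2Rows;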
Tables and indexes in SQL Server are organized into 8K chunks called pages. If your rows are about 100 bytes each, roughly 80 of them fit on a single page. If you update one of those rows so that it grows, for example by lengthening a variable-length column such as VARCHAR or NVARCHAR, and the page can no longer hold it, the page overflows and a page split occurs. The page split moves about half of the rows to a newly allocated page and leaves the rest on the original page. Inserting a row also causes a page split when the page it belongs on, based on the index order, is nearly full and the new row doesn't fit.
If there is room and your update or insert doesn't require a split, the operation is quick: SQL Server updates one page, then writes it to disk and to the transaction log. But if the updated or inserted row doesn't fit, SQL Server needs to allocate a new page, move about half the rows, and then write both pages to disk and to the transaction log. Additionally, the pages in all the indexes that point to those data pages need to be updated. Say your table has 1 clustered index and 4 nonclustered indexes: at a minimum 7 pages would be updated, 1 for the clustered index structure, 4 for the nonclustered indexes, and 2 for the data pages at the leaf of the clustered index. In this specific example the page split causes a minimum of 7 times the I/O of an insert or update that didn't require a page split.
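If you want to see this in action, here is a rough demo of my own (the table, column sizes, and names are assumptions, not from the original post): it packs two rows onto one leaf page, grows one of them past what the page can hold, and compares the page counts reported by sys.dm_db_index_physical_stats before and after.

CREATE TABLE dbo.SplitDemo
(
    ID INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    Filler VARCHAR(8000) NOT NULL
);

-- Two ~3,000 byte rows fit comfortably on one 8K page
INSERT INTO dbo.SplitDemo (Filler)
VALUES (REPLICATE('a', 3000)), (REPLICATE('a', 3000));

SELECT index_level, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SplitDemo'), NULL, NULL, 'DETAILED');

-- Growing one row past what the page can hold forces a page split
UPDATE dbo.SplitDemo SET Filler = REPLICATE('a', 6000) WHERE ID = 1;

SELECT index_level, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SplitDemo'), NULL, NULL, 'DETAILED');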
Corruption Challenge 1 – An alternative solution
After posting the winning solution for Corruption Challenge 1 from Brent Ozar, I realized that he and I had both solved the corruption by using the REPAIR_ALLOW_DATA_LOSS option on CheckDB. A very nasty move; however, it did repair the corruption.
-- The database name goes in the first argument (left blank here); the repair options require the database to be in single-user mode
DBCC CHECKDB ('', REPAIR_ALLOW_DATA_LOSS);
After reading some feedback, one of the winners stated:
As soon as he ran REPAIR_ALLOW_DATA_LOSS, I knew we weren’t on the same page. I just never do that unless I’ve exhausted all the other options.
Which is a good point. In this solution I was fairly certain what REPAIR_ALLOW_DATA_LOSS was going to do; in a real-world scenario, however, who knows what might be affected beyond the initial table that we know about.
There are several other options for cleaning up the corrupt table besides REPAIR_ALLOW_DATA_LOSS. These options still involve copying the data off to another table and recovering the missing data from row 31, but how the corruption itself gets cleaned up varies widely between them.
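As a loose sketch only, assuming hypothetical object names and key ranges rather than the actual challenge schema, the copy-off approach looks something like this:

-- Copy the rows that are still readable into a new table,
-- skipping the key range covered by the damaged page (range is illustrative)
SELECT *
INTO dbo.Revenue_Rebuilt
FROM dbo.Revenue
WHERE id < 25 OR id > 35;

-- Re-insert the skipped rows, including a rebuilt row 31, from a backup or
-- another copy of the data, then swap the tables:
-- EXEC sp_rename 'dbo.Revenue', 'Revenue_Corrupt';
-- EXEC sp_rename 'dbo.Revenue_Rebuilt', 'Revenue';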
Using the TSQL COALESCE Function
Here is a short video tutorial that shows how to use the COALESCE function in T-SQL on SQL Server. This was originally part of my…
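A quick illustration of the behavior the video covers (the video's own examples may differ, and the table below is a placeholder): COALESCE returns the first non-NULL value in its argument list.

SELECT COALESCE(NULL, NULL, 'third value') AS FirstNonNull;   -- returns 'third value'

-- A common use: substitute a default when a column is NULL
SELECT COALESCE(MiddleName, '') AS MiddleName
FROM dbo.People;   -- table and column names are placeholders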
DBCC CheckIdent
Being day six of the DBCC Command month at SteveStedman.com, today's featured DBCC Command is DBCC CHECKIDENT. Description: DBCC CHECKIDENT is used to check on…
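The basic forms, with a placeholder table name, look like this:

-- Report the current identity value and the current maximum column value without changing anything
DBCC CHECKIDENT ('dbo.MyTable', NORESEED);

-- Set the current identity value to 100, so the next row inserted normally gets 101
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 100);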
Sessions submitted to SQL Saturday Vancouver BC
I had so much fun at SQL Saturday in Vancouver BC last year that I have decided to go again. The sessions submitted are: The "Run…
How BIG is too BIG for a BIGINT IDENTITY?
After writing a recent blog post on Are you wasting half the capacity of IDENTITY, I started thinking about how much is too big for an…
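For a rough sense of scale (the insert rate below is just an assumed number for illustration), the largest BIGINT is 9,223,372,036,854,775,807, so even a busy table takes an extremely long time to exhaust it:

-- Back-of-the-envelope: years until a BIGINT IDENTITY starting at 1 runs out at an assumed insert rate
DECLARE @MaxBigInt BIGINT = 9223372036854775807;
DECLARE @InsertsPerSecond BIGINT = 1000;   -- purely an assumed workload

SELECT (@MaxBigInt / @InsertsPerSecond) / (60 * 60 * 24 * 365.0) AS YearsToExhaust;
-- Roughly 292 million years at 1,000 inserts per second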