Slow CheckDB – What do you do?


There are many times that CheckDB ends up being extremely slow, sometimes taking more than a day to run. This can make life difficult if you are trying to find out what is corrupt.

DBCC CheckDB(MyBigDatabase) WITH NO_INFOMSGS;

There are several tricks that I use to speed up DBCC CheckDB, depending on the specific environment. What I am looking for is what others do when they need to run DBCC CheckDB on a big database that seems to take forever to complete.

All answers posted by the end of Database Corruption Challenge 6 will earn an extra point in the Database Corruption Challenge. For the challenge itself, it is not feasible for me to create a 3TB database file and expect anyone to want to download it. I would love to do that, however I don’t think anyone would participate in the Corruption Challenge if they had to download a 3TB file.

If you need details on finding out what table CheckDB is stuck on (or currently processing) you can take a look at my Status of DBCC Check DB blog post. Included in the post are queries to find the table currently being checked by DBCC CheckDB, which is particularly useful when you have a slow CheckDB process running.
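As a quick illustration (a minimal sketch, not the exact queries from that post), the progress of a running check can be estimated from the `sys.dm_exec_requests` DMV, which reports a rough completion percentage for some DBCC commands:

```sql
-- Sketch: estimate progress of a running DBCC CHECKDB session.
SELECT  r.session_id,
        r.command,                                            -- e.g. 'DBCC CHECKDB'
        r.percent_complete,                                   -- rough estimate reported by some DBCC commands
        r.estimated_completion_time / 1000 / 60 AS est_minutes_remaining
FROM    sys.dm_exec_requests AS r
WHERE   r.command LIKE 'DBCC%';
```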

So, what have you done to speed up DBCC CheckDB, and why?

Posted in Corruption
14 comments on “Slow CheckDB – What do you do?”
  1. The time it takes to run a database consistency check really depends on what you actually want to check for.
    I usually run as:
    – run DBCC CHECKDB with physical_only (weekly basis)
    – run DBCC CHECKCATALOG (daily basis)
    – run DBCC CHECKALLOC (daily basis)
    – run DBCC CHECKFILEGROUP / DBCC CHECKTABLE (daily basis)(could be run on multiple sessions, for distinct FG/tables)

    I prefer running DBCC CHECKTABLE WITH DATA_PURITY on chunks of tables:
    split all of a database's tables into 7 chunks, distributed by size and count, and run one chunk each day.
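    That chunking idea can be sketched roughly as follows (the 7-way modulo split is illustrative only; a real rotation would weight the chunks by table size and persist the mapping):

    ```sql
    -- Sketch: check one seventh of the tables each day, based on the day of the week.
    DECLARE @chunk int = DATEPART(weekday, GETDATE()) % 7;
    DECLARE @name sysname, @sql nvarchar(max);

    DECLARE c CURSOR FOR
        SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        WHERE t.object_id % 7 = @chunk;          -- crude 7-way split for illustration

    OPEN c;
    FETCH NEXT FROM c INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'DBCC CHECKTABLE (''' + @name + N''') WITH DATA_PURITY, NO_INFOMSGS;';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM c INTO @name;
    END
    CLOSE c;
    DEALLOCATE c;
    ```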

  2. Rob Farley says:

    If DBCC is unexpectedly slow, then troubleshooting the server for disk pressure can be useful, as can looking for other changed conditions. A bunch of autogrow events in the default trace, or larger-than-normal backup files, could point to a database which is legitimately larger than before.

    For general slowness, it’s worth remembering that if a database has been backed up and restored, and the restored copy doesn’t have any DBCC errors, then the original can’t either. This can reduce the impact of DBCC being slow, which can be significant. If no one cares that it takes a long time, then it might not matter.

    If there have been errors, then perhaps running DBCC isn’t actually required. Checking msdb’s suspect_pages table can be useful to see where errors have been, if there’s no recent DBCC call.
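    Checking that table is a one-liner; for example:

    ```sql
    -- Pages SQL Server has flagged as suspect (event_type 1-3 are corruption-related)
    SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
    FROM   msdb.dbo.suspect_pages
    ORDER  BY last_update_date DESC;
    ```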

    And then there’s the exercise of simply querying each table. If you can get to your data, you can avoid data loss. It’s no substitute really, but can reassure customers that they haven’t lost anything.

  3. Patrick Flynn says:

    A lot of the options for making DBCC CHECKDB faster were covered in the PASS Summit 2014 presentation
    DBA-336, Making DBCC CHECKDB Go Faster, by Argenis Fernandez.

    Some of the options not mentioned above (other than running on a high-powered server with fast solid-state disks) include:

    Trace flags 2549 and 2562 – KB: http://support.microsoft.com/kb/2634571
    Disabling/dropping certain nonclustered indexes: those on computed columns, and filtered indexes on sparse columns
    Limiting memory available to CheckDB (https://www.sqlskills.com/blogs/jonathan/dbcc-checkdb-execution-memory-grants-not-quite-what-you-expect/)
    http://sqlblog.com/blogs/argenis_fernandez/archive/2013/06/16/dbcc-checkdb-on-vvldb-and-latches-or-my-pain-is-your-gain.aspx
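    For reference, the two trace flags from that KB article can be enabled globally around the check (a sketch; 2549 treats each database file as if it were on its own disk, while 2562 runs the check as a single batch at the cost of heavier tempdb usage):

    ```sql
    -- Enable the CHECKDB performance trace flags (KB 2634571), run the check, then turn them off.
    DBCC TRACEON (2549, -1);   -- treat each file as if on a separate drive (helps when files share a LUN)
    DBCC TRACEON (2562, -1);   -- run checks in a single batch; faster, but uses more tempdb
    DBCC CHECKDB (MyBigDatabase) WITH NO_INFOMSGS;
    DBCC TRACEOFF (2549, -1);
    DBCC TRACEOFF (2562, -1);
    ```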

    If you have access to the 2014 recordings then it is worth checking this out.

  4. Prageet Surheley says:

    Since a lot of options have already been mentioned above, including trace flags 2549 and 2562, I want to add the following points.

    How long DBCC CHECKDB takes to run depends on many factors:
    1. Size of the database.
    2. Concurrent I/O load on the server.
    3. Concurrent CPU activity on the server.
    4. Concurrent update activity on the database.
    5. Throughput capabilities of the I/O subsystem.
    6. Number of CPUs on the server.
    7. Speed of the disk where tempdb is placed.
    8. The number and type of corruptions that exist.

    To make DBCC CHECKDB go faster we can:
    1. Consider reducing the degree of parallelism (DOP) to lower the run time when using the option WITH PHYSICAL_ONLY.
    2. Consider reducing the memory available to DBCC CHECKDB, because by default the query processor will grant it a very large execution memory grant, potentially shrinking the buffer pool.
    3. Indexes on computed columns drastically slow down DBCC CHECKDB; the solution is to disable such indexes while DBCC CHECKDB is running.

    These are the points I think we should consider while running DBCC CHECKDB.
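    Point 2 above can be sketched with Resource Governor (the pool, group, and login names here are hypothetical; adjust the percentages to your server, and note the classifier function must live in master):

    ```sql
    -- Sketch: cap the memory grant for the session that runs DBCC CHECKDB.
    CREATE RESOURCE POOL CheckDbPool WITH (MAX_MEMORY_PERCENT = 20);
    CREATE WORKLOAD GROUP CheckDbGroup
        WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)
        USING CheckDbPool;
    GO
    CREATE FUNCTION dbo.fn_CheckDbClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        -- Route the dedicated maintenance login (hypothetical name) into the capped group
        IF SUSER_SNAME() = N'CheckDbLogin'
            RETURN N'CheckDbGroup';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_CheckDbClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    ```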

  6. Neil Abrahams says:

    Checking the version of SQL Server to ensure you’re not suffering from any known bugs can provide performance gains…

    https://support.microsoft.com/en-us/kb/3029825/

  7. Subhro Saha says:

    We can try disabling computed-column indexes, or running DBCC CHECKDB WITH PHYSICAL_ONLY.

  8. All good points above. A note that CHECKALLOC is already included in the CHECKDB and CHECKFILEGROUP operations, so if you’re running those on a VLDB, it isn’t necessary to run CHECKALLOC. Also, CHECKALLOC does not scan FILESTREAM data, so that’s something to keep in mind.

    Slightly tangential: sometimes you may get more than 1000 errors, which makes SQL Server Management Studio unhelpful in collecting all the messages. A good tip is to use the command line to run DBCC CHECKDB and output the results to a file.

    Something along the lines of: sqlcmd -E -Q "DBCC CHECKDB (CorruptDB) WITH ALL_ERRORMSGS, NO_INFOMSGS" -o C:\Temp\checkdbresults.txt

    As mentioned above, optimizing the tempdb setup might help, since DBCC CHECKDB uses tempdb. We can put tempdb on a separate, fast disk, and size it appropriately (initial size and autogrowth settings for both the tempdb data and log files). You might also want to run DBCC CHECKDB when there is low activity on the server and/or database; low activity on the server helps reduce contention on tempdb.

    When running DBCC CHECKDB on a very active large database, you might encounter errors (1450, 1452, or 665) regarding insufficient system resources. This is because DBCC CHECKDB creates a snapshot (a sparse file), and there are limits on sparse file size. To avoid this, run DBCC CHECKDB during times when the database is not heavily hit with activity. Alternatively, you can offload it to another server (restore the database on another system and run DBCC CHECKDB against it).
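    The offload approach in broad strokes (database name, file paths, and logical file names below are placeholders):

    ```sql
    -- Sketch: offload the full check to a secondary server by restoring the latest backup.
    RESTORE DATABASE MyBigDatabase_Check
    FROM DISK = N'\\backupshare\MyBigDatabase.bak'
    WITH MOVE N'MyBigDatabase'     TO N'D:\Data\MyBigDatabase_Check.mdf',
         MOVE N'MyBigDatabase_log' TO N'L:\Log\MyBigDatabase_Check.ldf',
         STATS = 5;

    DBCC CHECKDB (MyBigDatabase_Check) WITH NO_INFOMSGS, ALL_ERRORMSGS;
    -- A clean result here proves the backup (and hence the source at backup time) was clean;
    -- the production server still needs at least WITH PHYSICAL_ONLY checks.
    ```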

  10. Andre Kamman says:

    Many times I see the scenario where a backup of the production database is restored on a separate machine every night for reporting purposes. They then decide not to run DBCC checks on the production machine at all anymore, but instead run them on the nightly copy. This is firmly on my list of DON’Ts: you can offload part of the work, but you still have to run at least the physical checks on the primary production database.
    This post by Aaron Bertrand was mentioned before: http://sqlperformance.com/2012/11/io-subsystem/minimize-impact-of-checkdb
    It’s a very good summary and contains many useful links.
    I would suggest that anyone who wants to offload their CheckDB to another machine read the part called “Offload logical checks where possible” in Aaron’s post.

  11. paSQuaLe ceglie says:

    Improvements for the DBCC CHECKDB command may result in faster performance when you use the PHYSICAL_ONLY option.

    https://support.microsoft.com/en-us/kb/2634571

  12. Jon Gurgul says:

    I would look to split the checks, and optimise tempdb and I/O patterns as best as possible.

    Also, I would look to archive data that is not needed from the live system. This may mean removing it completely from the database, or putting it into read-only filegroups. My point being: there is no need to continually check for corruption in static data that we can just as easily restore from a backup.
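    That split might be sketched as follows (database and filegroup names are hypothetical; the read-only filegroup is verified once before being marked read-only and is covered by backups thereafter):

    ```sql
    -- Sketch: check only the read-write filegroup frequently; skip static archive data.
    USE MyBigDatabase;
    DBCC CHECKFILEGROUP (N'PRIMARY') WITH NO_INFOMSGS;   -- active data, checked often
    -- ALTER DATABASE MyBigDatabase MODIFY FILEGROUP ArchiveFG READ_ONLY;  -- archive once, then skip
    ```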

  13. As many others have pointed out, my scripts run DBCC CHECKDB WITH PHYSICAL_ONLY for databases bigger than 500GB, but once a month they run a complete check.

    Another way to speed it up is using FILESTREAM (when suitable, of course), as DBCC CHECKDB will verify the existence of the file but not its content, which results in faster checks.
