On March 18th I will be attending SQL Saturday 610 in Richmond, Virginia, and the day before I will be presenting a pre-conference session covering half a day on Database Corruption and half a day on Disaster Recovery Planning. This event has just been announced, and you can sign up for the pre-con session now. I will be presenting with my business partner and podcast co-host Carlos Chacon.
Emergency Preparation for Database Disasters
First half of the day on Database Corruption Preparation:
We will walk through several corrupt databases, exploring ways to find and fix the corruption. More importantly, we will explore how to prevent further data loss at the time corruption occurs. Learn what you can do to protect yourself when corruption strikes, how to avoid making things worse, and how to protect your data if things do get worse. You will leave with a checklist of steps to use when you encounter corruption. By the end of this session you will be ready to take on corruption, one database at a time.
Second half of the day on Disaster Recovery Planning:
How likely are disaster events to occur? How aware are you of the other risks you face? How prepared are you in the event that a disaster occurs?
In this session we will explore steps that go into building a solid disaster recovery plan, along with building a disaster recovery team to back up the plan.
When I have presented at PASS Summit and SQL Server user groups, I have often received feedback that I should expand my database corruption presentation into a pre-con session. Well, here it is: a chance to go into much deeper detail than I have been able to in previous sessions. I will present on several of the things that I have learned throughout my career and from the Database Corruption Challenge in 2015. The way to prepare for corruption is to practice finding and fixing corruption so that you will be ready when it strikes your environment.
For a couple of months we have had requests on the SQL Data Partners Podcast to do an episode on indexing. It is my pleasure to announce that the indexing episode (episode 81) was released today. Well, at least half of it. After we finished the recording, we realized that there was too much content to fit into our normal episode length, so we split it into a Part 1 and a Part 2. Part 1 launched this week, and Part 2 will come out next week.
The guest this week was Randolph West (Twitter: @BornSql), a SQL consultant at BornSQL, where he does performance tuning and optimization and works with best practices for DR and SQL Server maintenance. Randolph has also recently become a Microsoft MVP. Between Randolph, Carlos, and me, we were able to cover an extensive amount of detail on indexing.
In Part 1 we covered:
Using the phone book as an example, we chat with our guest Randolph West and explain heaps, non-clustered indexes, and clustered indexes. We also explain best practices for using them when creating new tables, and we cover the issues with using wide data types in your clustered indexes, along with the drawbacks of GUIDs (uniqueidentifiers) as a clustered index key.
We didn’t forget to cover the problems you might run into. For example, you might not think about indexes until your data grows very large. Sure, you can attempt to resolve the problem by moving to SSDs and increasing memory, but this only pushes the issue further into the future. In this episode, we discuss considerations for indexes and why database architecture is so important.
We would like to hear your opinions. What are your strategies when using indexes? Use the hashtag #sqlpodcast and let us know!
We had a lot of fun with this episode. I hope you enjoy it.
“The magic word you used there was fragmentation. This used to be a huge problem, as Steve mentioned, with spinning drives. I disagree with the idea that not worrying about fill factor at all is OK. You should worry about fragmentation. You should worry about these things purely from the point of view of knowing whether your database is in a good condition and is well maintained.”
Listen to Learn
Heaps and index types explained
Randolph’s thoughts on clustered indexes vs. non-clustered indexes
Similarity between clustered indexes and primary keys
The results of using Management Studio Designer for creating new tables
How to create covering indexes
The negatives of having too many indexes and what you can do about it
How do default fill factors affect indexes? Is faster storage a good reason to avoid fixing the indexes?
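To make the heap vs. clustered vs. covering-index discussion above concrete, here is a minimal T-SQL sketch. The table and index names are hypothetical illustrations, not taken from the episode:

```sql
-- A narrow, ever-increasing INT identity as the clustered key,
-- rather than a wide GUID (uniqueidentifier) key.
CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY (1, 1) NOT NULL,
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL,
    TotalDue   MONEY     NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);

-- A covering index: the INCLUDE columns let the query below be
-- answered from the index alone, with no key lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, TotalDue);

SELECT CustomerId, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerId = 42;
```

Without the CONSTRAINT clause the table would be a heap; the INCLUDE clause is what makes the non-clustered index covering for that particular query.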
Lately I have had the opportunity to work with performance tuning of queries running on the Azure Parallel Data Warehouse (Azure PDW). This has been interesting in that everything you thought you knew about SQL Server DMVs, writing queries, and overall performance tuning is just a little bit different.
My goal was to write a query to show me what is currently active and running on the Azure PDW database.
To start with, I discovered the view sys.dm_pdw_exec_requests, which contains all kinds of great information to get us started. Specifically, it holds information on queries that are currently running, or that have recently run or been attempted, on the Azure PDW database.
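The original query isn't shown in the post, but a starting point against that view might look something like this (column list trimmed to a few of the documented columns):

```sql
-- List current and recent requests on the Azure PDW appliance
SELECT request_id, [status], submit_time, start_time,
       end_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;
```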
Querying that view returned hundreds of rows, and didn’t really get me to where I wanted to go.
Next I added a WHERE clause to filter out the queries that were “done”; in this case, done means a status of ‘Completed’, ‘Failed’, or ‘Cancelled’.
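As a sketch, the filtered version might look like this, keeping only requests whose status is not one of those terminal values:

```sql
-- Only requests that are still active (not in a terminal state)
SELECT request_id, [status], submit_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY submit_time DESC;
```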
The chapter meeting will be at the following address. Thanks to Derrick Bovenkamp for providing the conference room for this meeting.
709 West Orchard Drive #4
Bellingham, WA 98225
SQL Server in Azure VM: Best Practices, Latest Features, and Roadmap
Luis Vargas, Senior Program Manager Microsoft
A recorded session from PASS Summit 2016 by Luis Vargas. Session Description: In this session, we’ll discuss the best practices (performance, availability, security, cost) for running SQL Server in Azure VM. We’ll also discuss the latest features, and the future roadmap. Speaker Bio: Luis Vargas is a Senior Program Manager Lead in the SQL Server team. He drives 3 areas: SQL Server on Azure VM, SQL Server Hybrid scenarios, and AlwaysOn.
In this week’s episode we are pleased to host Andy Leonard to discuss his thoughts on the SSIS Catalog and how this feature provides some really interesting benefits for ETL architecture. Although the SSIS Catalog DB is created with Management Studio, it does not behave like other databases. Our conversation in this episode revolves around the catalog, SSIS packages, and some of the complexities of migrating packages.
“[The SSIS Catalog] is really there to facilitate a lot of Enterprise functionality that is needed in any data integration engine you use.”
One of the services that I offer is on-call help for DBAs who encounter database corruption. Call me anytime, 24/7; as long as you are an existing customer, or we can work out a billing arrangement to turn you into a paying customer, I can help.
That’s all well and good if you are familiar with what database corruption is. But based on the occasional phone call I receive in response to my outreach to help with database corruption, I suspect I haven’t done the best job describing what corruption is.
A few minutes ago, at 12:15am, my phone rang. I was sound asleep, but it woke me up and I answered it. Still a bit groggy, I heard a voice on the other end asking, “Are you the one who can help with database corruption removal?” to which I answered yes. He then went into a story about how he had just been fingerprinted and wasn’t supposed to be arrested. He went on to describe how he didn’t feel he should have been arrested, but the police officers had told him that now that his fingerprints were in the database, there wasn’t anything they could do to undo it. I broke in with “sorry, that’s not the type of work I do,” and he continued to request my help to get his fingerprints out of the database and remove his arrest record. I ended the call, and I hope this wasn’t the one phone call he was allowed while being booked.
Here is a quick update on the last three podcast episodes we have published, closing out 2016 and starting 2017 with some great information from Paul Turley, Jimmy May, and Argenis Fernandez.
Episode 77: New Reporting Options with Paul Turley
We invited Paul Turley on the show to chat with us about some of the new features in SQL Server Reporting Services, but were pleasantly surprised to have the conversation take a different direction. Paul talks about some of the new training available for those interested in visualizations and why this community can be hard to define, and to connect with. There are so many visualization options, and even within Microsoft there are many products that overlap in functionality. In addition to running through the pick list of new features, Paul gives some guidance on why you should choose certain products and what makes a good use case for some of the new features.
With the new analytics features now available in SQL Server 2016 via SP1, I think there are going to be additional requirements for data professionals to provide better analytics, and this episode is a good starter for how to frame those conversations.
Episode 76: Availability Group Improvements with Jimmy May
Availability groups provide an exciting mix of high availability and disaster recovery features; however, from a performance standpoint, there are a few drawbacks. With the advances in SQL Server 2016, our guest Jimmy May set out to test the features and see if he could get some really high performance out of an availability group with two synchronous replicas. In this episode he talks with us about some of his findings and some of the pains associated with getting there.
“I’ve been a big fan of flipping the faster bit ever since even before I was a member of SQLCAT.” Jimmy May
Storage testing and validation is something we need to add under our roles and responsibilities as DBAs. Every database we ever manage is going to need storage, but how often do we kick the tires? Many times we’re basically told to go verify that array, or we’re doing a POC to test new storage, but are we really only testing connectivity? In this episode of the podcast, we chat with Argenis Fernandez about how he goes about testing a SAN array and the important metrics you should consider for your storage. If you are still using SQLIO or even Diskspd to test the IO for your SQL Server, don’t miss today’s episode.
Today I had the opportunity to present on TempDB to the Spokane SQL Server user group (PASS chapter). The session was titled “TempDB – Do This and Don’t Do That”, and it covers a bunch of tips and best practices around what to do and what to avoid relating to TempDB on your SQL Server.
Here is the download of the presentation and samples.
DECLARE @logInfoResults AS TABLE
(
    [RecoveryUnitId] BIGINT, -- only on SQL Server 2012 and newer
    [FileId] INT, [FileSize] BIGINT, [StartOffset] BIGINT, [FSeqNo] INT,
    [Status] INT, [Parity] TINYINT, [CreateLSN] NUMERIC(38, 0)
);
INSERT INTO @logInfoResults
EXEC sp_executesql N'DBCC LOGINFO WITH NO_INFOMSGS';
--SELECT AVG(FileSize) / 1024.0 / 1024.0 AS AvgSizeMB, COUNT(*) AS VLFs FROM @logInfoResults;
--SELECT FileSize / 1024 / 1024, [Status] FROM @logInfoResults;
SELECT * FROM @logInfoResults;
-- VLF count by status, with a rollup total row
SELECT ISNULL(CAST([Status] AS NVARCHAR(10)), 'Total') AS [Status], COUNT(*) AS VLFs
FROM @logInfoResults
GROUP BY [Status] WITH ROLLUP;
SQL Data Partners has been remastering some of the older podcast episodes, giving them better sound quality than the original releases. These will be made available on YouTube as they are completed. Here is one of the more recent remastered podcasts.
An interview with Louis Davidson about database design. This one was recorded before I joined the show as co-host and was episode #8, originally aired on September 9th, 2015.