Today I had the opportunity to complete and launch the February 2017 version of Database Health Monitor.
I hope you like the latest version. Here are the Release Notes:
Version 2.5 Release Notes – February 2017
Version 2.5 is the February 2017 release of Database Health Monitor, released on February 19, 2017.
- Outstanding app. Already referred it to several friends in the field. Well I do have to say the changes you have made (regarding version 2.5), seems to have made everything in the interface much snappier. Your work here is quite impressive. (Frank from Texas)
- Outstanding! Besides enjoying the layout of your app, one thing I really like about the Quick Scan Report is each entry has a corresponding link that discusses the finding so I can go back and review recommendations, cautions, etc. (Michael)
New Features in 2.5
- Upgraded the application to use the .NET 4.5 runtime.
- Fragmented Indexes Advisor – added buttons to rebuild or reorganize all fragmented indexes. Wow, this will save some time when you have a bunch of indexes to defragment.
- Added historic monitoring for disk space. Still need to add reports, but the tracking is there.
- Added a menu to the main application window. Removed the big buttons for Connect, Settings, and About, and replaced them with menu items.
- Added a menu item to link to the feedback survey.
- Rearranged some of the columns on the What is Active report to make it easier to see what is running right away without scrolling.
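Under the hood, the rebuild-or-reorganize decision for fragmented indexes typically follows the common 5%/30% fragmentation thresholds. Here is a sketch (not the tool's actual code) of how that decision can be generated in T-SQL; the threshold values are assumptions:

```sql
-- Generate a REBUILD or REORGANIZE command per fragmented index,
-- using the common thresholds: > 30% rebuild, 5-30% reorganize.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) +
       ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + '.' +
       QUOTENAME(OBJECT_NAME(ips.object_id)) +
       CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN ' REBUILD;'
            ELSE ' REORGANIZE;' END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;  -- skip heaps
```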
On March 18th I will be attending SQL Saturday 610 in Richmond, Virginia, and the day before I will be presenting a pre-conference session covering half a day on Database Corruption and half a day on Disaster Recovery Planning. The event has just been announced, and you can sign up for the pre-con session now. I will be presenting with my business partner and podcast co-host Carlos Chacon.
Emergency Preparation for Database Disasters
First half of the day on Database Corruption Preparation:
We will walk through several corrupt databases, exploring ways to find and fix the corruption. More importantly, we will explore how to prevent further data loss at the time corruption occurs. Learn what you can do to protect yourself when corruption strikes, how to avoid making things worse, and how to protect your data if things do get worse. You will leave with a checklist of steps to use when you encounter corruption. By the end of this session you will be ready to take on corruption, one database at a time.
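As a taste of that checklist, the first steps when corruption is suspected usually look something like the following (a minimal sketch, not the full checklist from the session; `YourDatabase` is a placeholder):

```sql
-- 1. See which pages SQL Server has already flagged as suspect.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- 2. Get the full picture of the damage before attempting any fix.
DBCC CHECKDB (YourDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Knowing the full extent of the damage first is what lets you choose the least destructive repair path instead of reaching straight for REPAIR_ALLOW_DATA_LOSS.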
Second half of the day on Disaster Recovery Planning:
How likely are disaster events to occur? How aware are you of the other risks you face? How prepared are you in the event that a disaster occurs?
In this session we will explore steps that go into building a solid disaster recovery plan, along with building a disaster recovery team to back up the plan.
When I have presented at PASS Summit and at SQL Server user groups, I have often received feedback that I should expand my database corruption presentation into a pre-con session. Well, here it is: a chance to go into much deeper detail than I have been able to in previous sessions. I will cover several of the things I have learned throughout my career and from the Database Corruption Challenge in 2015. The way to prepare for corruption is to practice finding and fixing it, so that you will be ready when corruption strikes your environment.
Lately I have had the opportunity to work on performance tuning of queries running on the Azure Parallel Data Warehouse (Azure PDW). This has been interesting in that everything you thought you knew about SQL Server DMVs, writing queries, and overall performance tuning is just a little bit different.
My goal was to write a query to show me what is currently active and running on the Azure PDW database.
To start with, I discovered the view sys.dm_pdw_exec_requests, which contains all kinds of great information to get us started. Specifically, it holds information on queries that are currently running, have recently run, or were attempted on the Azure PDW database.
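The starting point was presumably a plain SELECT against the view, along these lines (column list trimmed to the commonly useful ones):

```sql
-- Everything the DMV knows about recent and current requests.
SELECT request_id, session_id, status, submit_time,
       start_time, end_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;
```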
Querying that view returned hundreds of rows, and didn't really get me to where I wanted to go.
Next I added a WHERE clause to filter out the queries that were done – in this case, those with a status of ‘Completed’, ‘Failed’, or ‘Cancelled’.
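That filter can be sketched like this, keeping only requests that are still active:

```sql
-- Only the requests that are still active: filter out anything done.
SELECT request_id, session_id, status, submit_time,
       total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
WHERE status NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY submit_time DESC;
```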
Posted in Azure PDW
Tagged with: CAST
On Wednesday January 25th at 6:00pm we will be hosting the monthly meeting of the Bellingham SQL Server Users Group.
- In-Person @ 709 West Orchard Drive #4 Bellingham, Washington
- 18:00 – 20:00 (UTC-08:00) Pacific Time (US & Canada)
- Language: English
January 25th, 2017 – Bellingham SQL Server Users Group
For the January meeting of the Bellingham SQL Server Users Group we will watch a recorded session from PASS Summit 2016.
As usual, we will have pizza at the meeting provided by Stedman Solutions, LLC.
The chapter meeting will be at the following address. Thanks to Derrick Bovenkamp for providing the conference room for this meeting.
709 West Orchard Drive #4
Bellingham, WA 98225
SQL Server in Azure VM: Best Practices, Latest Features, and Roadmap
Luis Vargas, Senior Program Manager, Microsoft
A recorded session from PASS Summit 2016 by Luis Vargas. Session Description: In this session, we’ll discuss the best practices (performance, availability, security, cost) for running SQL Server in Azure VM. We’ll also discuss the latest features, and the future roadmap. Speaker Bio: Luis Vargas is a Senior Program Manager Lead in the SQL Server team. He drives 3 areas: SQL Server on Azure VM, SQL Server Hybrid scenarios, and AlwaysOn.
In this week's episode we are pleased to host Andy Leonard to discuss his thoughts on the SSIS catalog and how this feature provides some really interesting benefits for ETL architecture. Although the SSIS catalog database is created with Management Studio, it does not behave like other databases. Our conversation in this episode revolves around the catalog, SSIS packages, and some of the complexities of migrating packages.
“[The SSIS Catalog] is really there to facilitate a lot of Enterprise functionality that is needed in any data integration engine you use.”
Listen to Learn
- The benefits of project deployments
- Some of the issues migrating packages
- Why restoring the catalogdb can be problematic
- The various levels or components of a package
- How you can view reports without going into SSMS
One of the services I offer is on-call help for DBAs who encounter database corruption. Call me anytime, 24/7: as long as you are an existing customer, or we can work out a billing arrangement to turn you into a paying customer, I can help.
That's all good if you are familiar with what database corruption is. But the occasional phone call I receive through my outreach to help with database corruption makes me think that I haven't done the best job describing what corruption is.
A few minutes ago, at 12:15am, my phone rang. I was sound asleep, but it woke me up and I answered. Still a bit groggy, I heard a voice on the other end asking “Are you the one who can help with database corruption removal?” to which I answered yes. He then went into a story about how he had just been fingerprinted and wasn't supposed to be arrested. He went on to describe how he didn't feel he should have been arrested, but the police officers had told him that now that his fingerprints were in the database, there wasn't anything they could do to undo it. I broke in with “sorry, that's not the type of work I do”, but he continued to request my help to get his fingerprints out of the database and remove his arrest record. I ended the call, and hope this wasn't the one phone call he was allowed while being booked.
Posted in Corruption
Tagged with: DBA, SQL Server
Here is a quick update on the last three podcast episodes we have published, wrapping up 2016 and starting out 2017 with some great information from Paul Turley, Jimmy May, and Argenis Fernandez.
Episode 77: New Reporting Options with Paul Turley
We invited Paul Turley on the show to chat with us about some of the new features in SQL Server Reporting Services, but were pleasantly surprised to have the conversation take a different direction. Paul talks about some of the new training available for those interested in visualizations, and why this community can be hard to define – and connect with. There are so many visualization options, and even within Microsoft there are many products that overlap in functionality. In addition to running through the pick list of new features, Paul gives some guidance on why you should choose certain products and what makes good use-case scenarios for some of the new features.
With the new analytics features now available in SQL Server 2016 via SP1, I think there are going to be additional requirements for data professionals to provide better analytics, and this episode is a good starting point for how to frame those conversations.
Link to Show: http://sqldatapartners.com/2017/01/04/reporting-improvements/
Episode 76: Availability Group Improvements with Jimmy May
Availability groups provide some exciting features mixing high availability and disaster recovery; however, from a performance perspective, there are a few drawbacks. With the advances in SQL Server 2016, our guest Jimmy May set out to test the features and see if he could get some really high performance out of an availability group with two synchronous replicas. In this episode he talks with us about some of his findings and some of the pains associated with getting there.
“I’ve been a big fan of flipping the faster bit ever since even before I was a member of SQLCAT.” Jimmy May
Episode Link: http://sqldatapartners.com/Jimmy
Episode 75: Storage Testing with Argenis Fernandez
Storage testing and validation is something we need to add to our roles and responsibilities as DBAs. Every database we ever manage is going to need storage, but how often do we kick the tires? Many times we're basically told to go verify that array, or we're doing a POC testing new storage, but are we really only testing connectivity? In this episode of the podcast, we chat with Argenis Fernandez about how he goes about testing a SAN array and the important metrics you should consider for your storage. If you are still using SQLIO or even Diskspd to test the I/O for your SQL Server, don't miss today's episode.
Episode Link: http://sqldatapartners.com/reporting