DBA Morning Checklist with PBM 3.0

At PASS Summit I presented my session Run Your DBA Morning Checklist with Policy-Based Management. I started the session by asking the audience what they wanted to check every morning on their SQL Server environment.
I noted down the list and compared it to my existing morning checklist. The good thing about the audience list was that it gave me some new ideas to add to my DBA Morning Checklist. I’m pleased to share my new DBA Morning Checklist 3.0 with the following checks added.

Default Backup Compression

The policy checks whether the server setting Compress Backup is switched on. Backup compression increases CPU usage, and the additional CPU consumed by the compression process might adversely impact concurrent operations. Before enabling this setting, make sure that the CPU increase has no impact on your production environment. In my experience, I have never had any issue with using backup compression by default. In most cases, full or differential backups are taken during low business hours, when the server can handle the extra CPU load.
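
If you want to double-check the same setting by hand outside Policy-Based Management, a minimal T-SQL sketch looks like this; the same sys.configurations pattern works for the other server settings in this checklist (max server memory, optimize for ad hoc workloads, remote admin connections):

-- Check whether backup compression is on by default (value_in_use = 1)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'backup compression default';

-- Enable it, assuming you have confirmed the extra CPU load is acceptable
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;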

Max degree of parallelism

The policy checks the server setting Max Degree of Parallelism. The condition is built according to the Microsoft recommendations:

  • For servers that use more than eight processors, use the following configuration: MAXDOP=8.
  • For servers that have eight or fewer processors, use the following configuration, where N equals the number of processors: MAXDOP=0 to N.
  • For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs that are assigned to each NUMA node.
  • For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.
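
As a quick manual cross-check of these recommendations, you can put the configured value next to the processor information the server exposes; a sketch (interpreting the NUMA and hyper-threading rules is still up to you):

-- Compare the configured MAXDOP with the logical CPU count
SELECT c.value_in_use AS configured_maxdop,
       i.cpu_count AS logical_cpus,
       i.hyperthread_ratio
FROM sys.configurations AS c
CROSS JOIN sys.dm_os_sys_info AS i
WHERE c.name = 'max degree of parallelism';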

Maximum Server Memory

I’m just going to check whether the Maximum Server Memory setting differs from the default of 2147483647 MB. It’s difficult to check for specific values because the policy cannot know whether other instances or other software on the machine need memory as well.

Optimize for Ad Hoc Workloads

The policy checks whether the server setting Optimize for Ad Hoc Workloads is set to True. By default, it is set to False.
The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single-use ad hoc batches. More information can be found on https://msdn.microsoft.com/en-us/library/cc645587.aspx

I don’t see any downside to enabling this setting.

Outdated Statistics

I’m going to check whether every statistic has been updated within the last 7 days.
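
A hand-run equivalent of this policy for the current database could look like the sketch below (note that statistics that have never been updated return NULL from STATS_DATE and are not caught by this filter):

-- Statistics that have not been updated in the last 7 days
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS statistic_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
INNER JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE o.is_ms_shipped = 0
  AND STATS_DATE(s.object_id, s.stats_id) < DATEADD(DAY, -7, GETDATE());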

Remote DAC Connection

This policy checks whether the server setting Remote DAC Connection is enabled.
The DAC lets an administrator access a running server to execute diagnostic functions or Transact-SQL statements, or to troubleshoot problems on the server, even when the server is locked or running in an abnormal state and not responding to a SQL Server Database Engine connection. More information can be found on https://msdn.microsoft.com/en-us/library/ms190468.aspx I recommend always enabling this setting after installation.

Tables with Fragmented indexes

This policy is verified against every table in every database. The policy fails if an index has a fragmentation percentage higher than 10% and a page count greater than 1000.
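
Expressed as plain T-SQL against the current database, the same condition is roughly:

-- Indexes above the checklist thresholds (fragmentation > 10%, page count > 1000)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;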

Trace Flags Enabled

By default, no trace flags are enabled. However, some trace flags can be useful to enable on production servers. I have blogged about the trace flags I enable by default.
In this policy, I’m going to check whether all of these trace flags are enabled.
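
To see which trace flags are currently enabled globally and compare them against your own default list:

-- List all globally enabled trace flags
DBCC TRACESTATUS(-1);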

Perform volume maintenance tasks

Perform volume maintenance tasks is a local group policy that can be set to enable Instant File Initialization. More information can be found on https://msdn.microsoft.com/en-us/library/ms175935.aspx

However, I was not able to define a policy to check this GPO… I thought it would be possible to check it with WMI, but I haven’t found a solution yet. So, if somebody can help me out with this, please let me know 😉

Again, you can download my full DBA Morning Checklist 3.0 from http://aka.ms/dbamorningchecklist Feel free to adjust it 😉

Enjoy!

Pieter

How to perform index maintenance on Azure SQL Database – Reviewed

A few months ago, I wrote a blog post on how to perform index maintenance on Azure SQL Database.
I thought my solution with Ola Hallengren’s maintenance scripts worked fine; however, I got some comments on the post and on Twitter that there was an issue with my solution.

After analyzing and retesting it, it turns out that there was indeed an issue. During the setup, I tried to create the IndexOptimize stored procedure in the master database, but that is not allowed.
So, I created a management database and created the stored procedures in that DB. I’ve since noticed that when you run select * from sys.databases in a user database, it only returns the database itself and master…
It’s also documented here https://msdn.microsoft.com/en-us/library/ms178534.aspx#Anchor_1

This was an issue because the IndexOptimize stored procedure also uses sys.databases to determine which databases to process. So, the runbook was working fine, except that it was only doing an index optimize in my management DB and nothing more…

I’ve contacted Ola about this and he told me that sys.databases is not the only issue. The problem is also that Azure SQL Database does not support three-part naming (DatabaseName.SchemaName.ObjectName). That means it is not possible to have a stored procedure in one database and have it do something in another database.

Workaround

After investigating, I’ve found a possible workaround for this issue.
The first step is to create the IndexOptimize stored procedure in every user database.
Secondly, I’ve modified my runbook so that the selection of the user databases is done inside the runbook instead of in the stored procedure.

I first connect to the master database to retrieve a list of all the online user databases.
Once I’ve got it, I connect to each of the user databases separately in a loop and execute the index maintenance procedure.
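
The database selection itself boils down to a simple query against master, something like the sketch below; in my runbook, the exclusion of master actually happens on the PowerShell side instead:

-- Run against master: all online user databases
SELECT name
FROM sys.databases
WHERE state_desc = 'ONLINE'
  AND name <> 'master';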

This solution seems to be working so far. I know it is not (yet) as flexible as Ola’s scripts but it is certainly a good step in the right direction.
You can find the new script on the original post.

Good luck with it and if you have any issues with it, please let me know.

Pieter

How to view Azure SQL Database Audit Logs?

When you enable auditing on your Azure SQL Database, it captures database events and writes them to an audit log on your Azure Storage account.
In fact, the audit log is actually an Azure Table which keeps the data according to the retention period that you have specified. More information about auditing on Azure SQL Database can be found here.
The only “issue” with auditing is that you cannot read the Azure Storage Table directly from the portal. You’ll have to use different tools to analyze your audit logs.

The Azure Portal Dashboard

The Azure Portal Dashboard gives you an overview and summary of the events that have happened on your database.
You can see the number of schema changes, login failures, data changes, stored procedures, …
This dashboard is only available in the new portal.

 

Azure Storage Explorer

The Azure Storage Explorer is a free tool that you can download from Codeplex and that allows you to read the Azure audit log in detail. With the Storage Explorer, you can see and configure all your storage account components.
Once you have installed the tool, open it and click Add Account. Specify the storage account and the access key for the account where the audit table is stored.


When you’re connected, you’ll see a folder Tables, where you can find your audit log. By default, the name of the audit log always has the same structure: it begins with “SQLDBAuditLogs” + the name of your server + the name of your database + the date the audit was activated.

When you click on the table, the audit log opens and you can see all the tracked events that have been logged.
Be aware that the event time might be in a different time zone than the one where you are located. Also take into account that it can take some time to open the audit log if many events are tracked.
You also have the possibility to download the logs locally. Next to the event time, you also see the server and database name, the application that executed the query, the type of the event, and even the SQL statement or stored procedure that was executed.
You can really see who did what and when on your database, which can be useful for troubleshooting.

In the same audit log, you can also see the number of AffectedRows and ResponseRows, and how long it took the server to execute the query.

Excel and Power Query

A template can be downloaded from the Azure Portal to help you analyze the audit logs. The Excel file allows you to connect to your Storage Account and use Power Query to import the log information.

I won’t explain how to use the Excel file, because Microsoft already provides a detailed explanation in the Excel file itself.

Power BI

Microsoft also released the Power BI Azure Audit Logs content pack which can help you easily analyze and visualize the wealth of information contained in these logs.
You just need to connect to your storage account through the content pack and a nice dashboard will be created for you automatically.

More information can be found on this MSDN blog post.

If you have other ways to connect to the audit logs, please share them with me. I’m always interested.

Pieter

4 reasons to sign up for Speaker Idol at PASS Summit 2015

Last year, Denny Cherry (blog | twitter) organized a Speaker Idol contest at PASS Summit for the first time. And guess what, I won the contest, and I’m really happy that I get to present at the event for the first time.

This year might be your lucky day, because Denny is organizing Speaker Idol again!

Let me give you 4 good reasons why you should sign up for the contest and take your shot!

Learn

I learned a lot during the contest. You have to present in front of a jury of “senior” and talented speakers like Tim Ford, Jason Strate, Karen Lopez, Grant Fritchey, Joey d’Antoni, …
You only get 5 minutes to present, so it’s really short. Don’t give a 2-minute introduction, for example, because those 5 minutes are over before you know it. And don’t go over the 5 minutes, because you really have to stop. So, even though this is a really short presentation, take your time to prepare good slides and rehearse!
The jury will give you good tips and remarks on what was good and what you could do better. Listen to them and take their advice! If you go through to the next round, make sure you use that advice.
Also listen to the other candidates and to the tips the jury gives them. You might get some new ideas and make your presentation even better!

You can win!

Last year I submitted 5 sessions for the event, but none of them were selected…
I was going to the event anyway, so I thought, why not participate in Speaker Idol? I had nothing to lose.
To my surprise, I won the contest. And yes, you can win too! Believe in yourself and give the best you can during those 5 minutes.

This year, the prize is again a speaker slot at next year’s event! Imagine being the only person who is already 100% sure of speaking at PASS Summit 2016! It’s a really good feeling, believe me 😉. On the other hand, it can make you nervous, because you have a whole year to think of a good session.

Meet new people

At PASS Summit, you meet new people every day, but at Speaker Idol, you meet your fellow contestants.
They are just as nervous as you are 😉 and it’s nice to see who those people are and what their experience is. It’s always nice when you meet some of them again at other SQL events.
During the finals, there were a lot of people in the room, which was really cool. People were interested in the contest, and I expect an even bigger audience this year.

It’s fun!

Believe me, it’s really fun to do. Last year was my first Summit experience and it was all new to me.
You feel the tension in the room, but on the other hand, you laugh a lot 😉

Well, I hope to see you all in my session and don’t forget “take your shot, you have nothing to lose!”

How to perform index maintenance on Azure SQL Database

It’s been a long time since I wrote a blog post and I finally found some time 😀 I’ve been playing a lot recently with the new Azure SQL Database V12 version. It’s really cool because it has a lot of improvements compared to V11. With the V12 version, you have almost the same functionality as an on-premises database, and it also includes new features like row-level security, dynamic data masking and the Query Store.

Just like with an on-premises database, you have to maintain your indexes and statistics, because indexes can also get fragmented on an Azure SQL Database. With an on-premises server or a VM in Azure, most DBAs schedule an index maintenance job under the SQL Server Agent. However, with Azure SQL Database, you have to think inside the database, which means you don’t have any server functionality, including the SQL Server Agent. Luckily, the Azure platform provides Azure Automation, which can be used as a replacement for the SQL Server Agent.

In this blog post, I’m going to explain how you can schedule your index maintenance job, or any other job that needs to be scheduled.

Create a new Azure Automation Account

First of all, you have to create a new Azure Automation account. Go to the portal and select the Automation tab. Click the Create button to create the new Automation account. Fill in the name of the account and the region. Choose the region that is closest to you.

Create an Azure Automation credential asset

As a prerequisite, create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server. Click on the Automation account that you have just created and select Assets. In this section, select the button “Add Setting” at the bottom.

Select the option “Add Credential”


Select the credential type “Windows PowerShell Credential”, because we’re going to use this credential in the PowerShell workflow. Give the credential a name.

Specify the username and the password that you want to link to the credential. This is the user that will connect to your Azure SQL Database.

Click on OK and wait until the credential is created.

Install your maintenance scripts

Make sure that you have installed your maintenance scripts/procedures on all your Azure databases. In my example, I’m using the maintenance scripts of Ola Hallengren. For the index maintenance, I have to install the scripts IndexOptimize.sql and CommandExecute.sql. Make sure you download the latest version, because Ola fixed an issue with the index optimize on Azure SQL Database on July 19th, 2015. There is still a small issue with the scripts: Ola’s scripts use cross-database references, which are not supported in Azure SQL Database at the moment, so the @Databases parameter will not work correctly. Please also check the comments of this blog post; you have to implement a workaround in the runbook.

Import the Maintenance Runbook

We have now set up all the prerequisites, so we can start with creating a runbook in our Azure Automation account. A runbook is a PowerShell workflow that needs to be created or imported. You can compare it to configuring a job step in a SQL Server Agent job. The runbook contains the SQL scripts that need to be executed for the index maintenance and will be scheduled later on. Select your Automation account and go to the runbook tab. Click the “Import” button at the bottom to upload your PowerShell workflow. Select the PowerShell script that you have created and upload it.

Here is the script that I used.

<#
.SYNOPSIS
 Perform index maintenance

.DESCRIPTION
 This runbook provides an example of how Azure Automation can be used to accomplish common SQL Agent tasks in the cloud.
 As a prerequisite, please create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server ($SqlServerName).
 Make sure that you have installed the scripts IndexOptimize.sql and CommandExecute.sql of Ola Hallengren (https://ola.hallengren.com/downloads.html)
 The check for the MAXDOP value in the IndexOptimize.sql script uses sys.dm_os_sys_info, which is currently not supported,
 so be sure to disable that check, otherwise it will return an error.

.EXAMPLE
 SQLServerIndexMaintenance

.NOTES
 AUTHOR: Pieter Vanhove
 LAST EDIT: October 20, 2015
#>

workflow SQLServerIndexMaintenance
{
    param
    (
        # Fully-qualified name of the Azure DB server
        [parameter(Mandatory=$true)]
        [string] $SqlServerName,

        # Credentials for $SqlServerName stored as an Azure Automation credential asset
        # When using in the Azure Automation UI, please enter the name of the credential asset for the "Credential" parameter
        [parameter(Mandatory=$true)]
        [PSCredential] $Credential
    )

    inlinescript
    {
        # Set up credentials
        $ServerName = $Using:SqlServerName
        $UserId = $Using:Credential.UserName
        $Password = ($Using:Credential).GetNetworkCredential().Password

        # Create a connection to the master database
        $MasterDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
        $MasterDatabaseConnection.ConnectionString = "Server = $ServerName; Database = Master; User ID = $UserId; Password = $Password;"
        $MasterDatabaseConnection.Open();

        # Create a command that retrieves the names of all online databases on $ServerName
        $MasterDatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
        $MasterDatabaseCommand.Connection = $MasterDatabaseConnection
        $MasterDatabaseCommand.CommandText =
        "
        select name from sys.databases
        where state_desc='ONLINE'
        "

        # Execute the reader to get the list of database names
        $MasterDbResult = $MasterDatabaseCommand.ExecuteReader()

        # Proceed if there is at least one database
        if ($MasterDbResult.HasRows)
        {
            # Reusable connection and command for the individual databases
            $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
            $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand

            # Iterate through each database under $ServerName
            while($MasterDbResult.Read())
            {
                $DbName = $MasterDbResult[0]

                # Only process user databases (i.e., not the master DB)
                if($DbName -ne "Master")
                {
                    # Set up the connection string for $DbName
                    $DatabaseConnection.ConnectionString = "Server=$ServerName; Database=$DbName; User ID=$UserId; Password=$Password;"
                    $DatabaseConnection.Open();

                    # Create a command for the specific database $DbName
                    $DatabaseCommand.Connection = $DatabaseConnection

                    Write-Output "Perform index maintenance on $DbName"

                    # Execute Ola Hallengren's IndexOptimize in the current database.
                    # @Databases must be the current database name because Azure SQL Database
                    # does not support cross-database (three-part name) references.
                    $DatabaseCommand.CommandText = "
                    EXECUTE dbo.IndexOptimize
                    @Databases = '" + $DbName + "',
                    @FragmentationLow = NULL,
                    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
                    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
                    @FragmentationLevel1 = 5,
                    @FragmentationLevel2 = 30,
                    @UpdateStatistics = 'ALL',
                    @OnlyModifiedStatistics = 'Y'
                    "
                    Write-Output $DatabaseCommand.CommandText
                    $NonQueryResult = $DatabaseCommand.ExecuteNonQuery()

                    # Close the connection to $DbName
                    $DatabaseConnection.Close()
                }
            }

            # Close the connection to the master database
            $MasterDatabaseConnection.Close()
        }
    }
}

It’s based on a script that I found on Codeplex. As you will notice, I have specified 2 parameters:

  • $SqlServerName: the server on which we want to perform the index maintenance.
  • $Credential: the username that will be used to connect to the SQL Server. This is linked to the credential that we created in step 2.

The workflow first connects to the master database to retrieve all the online databases. As mentioned in the install maintenance scripts part, I had to find a workaround because you cannot use cross-database stored procedures on Azure SQL Database, which Ola’s IndexOptimize script relies on. Once I have the list of all the databases, I connect to each of the DBs separately and execute the index optimize. Please note that the @Databases parameter should contain the current DB name. This solution is not (yet) as flexible as Ola’s solution, but it’s a good start.

Once the import is done, you will notice that the Authoring column still has a “New” status. The runbook still needs to be published (see next chapter).

Test your Workflow and Publish

Before you can actually start using this workflow, you have to publish it. However, I recommend first testing that everything works fine. Once the runbook has been created, you can click on it and go to the Author section.


In the “Draft” section you can see the workflow that you have just imported. Click the “Test” button at the bottom to test your runbook. Before the runbook is actually executed, you have to provide the 2 parameters. In my example, the credential is the AzureDBCredential that I created in step 2. My SQL Server is called pvhv12.database.secure.windows.net. This is the “SQL Server” that my databases are running on.


If all goes well, you should see an empty output pane with the status COMPLETED.
If not, you will see the errors in the output pane. As soon as you have tested the job and everything works fine, you can publish the runbook and start using it.

Schedule the runbook

The last step is to schedule the published runbook. This is actually the same as in the SQL Server Agent: you have to link a schedule to the job, or in this case the runbook. In the runbook, click on the “Schedule” section and select “Link to a New Schedule”.
Type in the name of your schedule; again, the same principle as naming a job schedule in the SQL Server Agent.
Secondly, you need to configure the schedule. Currently you can choose between One time, Hourly or Daily. In my example, I want my index maintenance to run on a daily basis, starting at 23:00.

In the final step, you have to specify the runbook parameters that will be used when the job is executed. These are the same values that I used in the test phase. Once the schedule is created, your index maintenance will run on a daily basis.

Verify job history

Just like in the SQL Server Agent, you can also verify the job history of a runbook. Select “Jobs” in your runbook to verify the job history. When you click on one of the jobs, you can even see the error output, the job summary and the parameters that were used.

Summary

To summarize, here is a comparison between Azure Automation and a SQL Server Agent job.

Azure Automation                     SQL Server Agent Job
-----------------------------------  ----------------------------------
Create an Azure Automation Account   Create a SQL Server Agent Job
Create a runbook                     Create a Job Step
Test the runbook                     Start Job at Step
Publish the runbook                  Save the job
Schedule the runbook                 Schedule the SQL Server Agent Job
View jobs of the runbook             View History

 

Piecemeal restore of a partitioned table

A few weeks ago I presented my Advanced Backup and Restore session at SQL Rally in Denmark.
One of the topics that I showed was the piecemeal restore functionality. This allows you to restore your database in “pieces”. The files or filegroups that have been restored are already accessible while the others are still restoring. Jes Borland (blog | twitter) has written a good blog post on this topic.
During the session, I got the question whether it was possible to restore filegroups that were used by a partitioned table, which would make the table partially available.
My answer was that you had to restore all the files before the table became available, but I have since figured out that my answer was not correct.

With this blog post I want to correct my answer.

Setup the test scenario

For testing purposes, I created a database, MyDatabase, with 5 filegroups: PRIMARY, Partition1, Partition2, Partition3 and Partition4.

Once the database was created, I added my partition function and partition scheme.
As you can see, the partition function has 3 boundaries (1, 100, 1000) and the partition scheme specifies the filegroup that holds each of the four partitions.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (Partition1, Partition2, Partition3, Partition4);

The next step was creating a partitioned table based on the partition scheme.

--Create the partitioned table
CREATE TABLE PartitionedTable
(
	PartitionColumnID int NOT NULL,
	Name varchar(100) NOT NULL,
	CONSTRAINT [PK_PartitionColumnID] PRIMARY KEY CLUSTERED
	(
		PartitionColumnID ASC
	) ON [myRangePS1]([PartitionColumnID])
) ON myRangePS1(PartitionColumnID);

Then I filled the table with 1000 records.

declare @i as int
set @i=1
while @i<=1000
	begin
		insert into PartitionedTable
		select @i,'Name' + CAST(@i as varchar)
		set @i=@i+1
	end

The query below gives me the number of records per partition.

SELECT $PARTITION.myRangePF1(PartitionColumnID) AS Partition, 
COUNT(*) AS [COUNT] FROM PartitionedTable
GROUP BY $PARTITION.myRangePF1(PartitionColumnID)
ORDER BY Partition ;
GO

Restore the partitioned table

Before I can start with a piecemeal restore, I of course need a backup of my database. Don’t forget to change the <BackupPath> in the script.

--Take a new FULL BACKUP
BACKUP DATABASE [MyDatabase]
TO DISK = N'<BackupPath>MyDatabase.bak' WITH COMPRESSION,INIT
GO
BACKUP LOG [MyDatabase] TO DISK=N'<BackupPath>MyDatabase_Log.trn' WITH FORMAT,INIT,COMPRESSION
GO

Ok, let’s drop the database and begin with the piecemeal restore

USE [master]
GO
ALTER DATABASE [MyDatabase] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE [MyDatabase]
GO

First of all, I’m going to restore the PRIMARY filegroup so that my database is already available and “ONLINE” again.

RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'PRIMARY'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH PARTIAL, NORECOVERY 
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of my database files, you can see that the file in the PRIMARY filegroup is already ONLINE.

SELECT [name], [state_desc] 
FROM [MyDatabase].sys.database_files;
GO

Secondly, I restore the Partition1 filegroup. Don’t forget to always apply the transaction log backup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition1'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I checked the status of my filegroups again, the Partition1 filegroup was now ONLINE.

I know that 1 record is stored in this filegroup (see “the number of records per partition”), so let’s see if I can query the table.

use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable
GO

Well, I get an error that the second partition cannot be accessed, but if you look at the results, the query returned 1 record!

Next, I restored the Partition2 filegroup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition2'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

And I executed the same query. I again got an error, this time that the Partition3 filegroup was not accessible, but the query returned 100 rows. This is correct, because I know that there were 100 records in the first 2 partitions of the table.

Of course, if you use the correct WHERE clause and only query the ONLINE filegroups, you will not get any errors at all.

--Try to select the tables
use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable where PartitionColumnID<=100
GO

Finally, I also restored Partition3 and Partition4, and my table was completely available again.
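
For completeness, those last two restores follow exactly the same pattern; you can even name both filegroups in a single RESTORE statement:

USE master;
GO
RESTORE DATABASE [MyDatabase]
FILEGROUP = 'Partition3', FILEGROUP = 'Partition4'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO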

Conclusion

My answer during my session was not correct. It is possible to restore a partitioned table in pieces and make part of the table available while the other filegroups are still recovering.
This can be really helpful to improve your RTO. Instead of restoring the complete table in one go, you can restore only the partitions that you really need to get your application back up and running.
The less important partitions, for example those that only contain historical data, can be restored at a later stage.

Well, as always, you’re never too old to learn something!

Time for a new challenge!

I’m excited to announce that on the 1st of April (not a joke 😉), I will be joining Microsoft as a SQL Server Premier Field Engineer. After 4.5 amazing years working for Kohera, it was time for a new challenge.
When I started my job as a consultant, I knew only a little about SQL Server. I began by working at a bank in Brussels for 1.5 years, where I learned a lot about daily DBA tasks.
After this “internal” job, I really became a consultant, working on various shorter projects: consolidations and migrations, SQL audits, performance tuning, teaching customers and many more…
I would like to give special thanks to Frederik Vandeputte and Koen Jacobs (both managing partners of Kohera) for the chances and opportunities that they gave me. It helped me a lot in improving my SQL skills. Thanks guys for the support and for believing in me!
Another thank you goes to my colleagues. I didn’t see them a lot, but when we met, it was always fun to hear their SQL stories.

I’m really looking forward to starting work at Microsoft! I’ve waited a long time for this, believe me 😉.
A PFE delivers Proactive (workshops, health checks, risk assessments) and Reactive (on-site support, troubleshooting) engagements and is a highly skilled engineer with deep technical expertise in a given technology.

More information on the Premier Field Engineer role can be found here, or watch the Microsoft Services – Premier Field Engineer Role video.

I still hope to do presentations and events for the SQL community, which I really enjoy!

Well… I’ll see you around Europe!

Pieter

 

SQL Server 2012 SP2 CU4 breaks Data Collection

Last week, one of my clients upgraded their SQL Servers to SQL Server 2012 SP2 CU4.
After the upgrade, the internal DBA notified me that the data collection was not showing any Query Statistics anymore in the Management Data Warehouse report.
I verified the logs of the Query Statistics collection set and saw that the collection ran for a few minutes and then failed with the following errors:

The external columns for ODS – Get snapshot of dm_exec_requests are out of synchronization with the data source columns. The external column “command” needs to be updated.

Truncation may occur due to retrieving data from database column “command” with a length of 32 to data flow column “command” with a length of 16.

The collection never ran correctly, so the upload to the MDW never happened. That’s why the report was empty.
But why did the collection package start to fail? Was the DMV dm_exec_requests updated so that the command column became 32 characters instead of 16? I double-checked with BOL and with an older version of SQL Server, but the command field has always had a length of 32.
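
If you want to double-check the DMV definition yourself, a small sketch like this works (max_length in sys.system_columns is in bytes, so 64 bytes corresponds to nvarchar(32)):

-- Check the definition of the command column in sys.dm_exec_requests
SELECT c.name, c.max_length
FROM sys.system_columns AS c
WHERE c.object_id = OBJECT_ID('sys.dm_exec_requests')
  AND c.name = 'command';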

Gianluca Sartori (blog | twitter) suggested on Twitter #sqlhelp to modify the SSIS packages to fix this. I downloaded the package from the server, modified the command field length to 32 characters and uploaded it again to the server. The QueryActivityCollect package was now running fine, but the QueryActivityUpload package started to fail with the same error on 3 different components (I can’t remember which ones). I modified them too, and finally I got everything working. … Well… not quite… Still no data on my report. I noticed that the upload SSIS package was no longer loading the queries into the [snapshot].[query_stats] table…

I was getting frustrated and stopped modifying the packages. There had to be a better way to fix this! So I came up with the following workaround.

  • I uninstalled CU4. Yes, you can uninstall SPs and CUs. Just go to Programs and Features in the Control Panel, select the option “View installed updates”, right-click on the CU and select Uninstall.

     

  • Once the CU was uninstalled, my data collection started to work again. I downloaded the packages from the server again and compared them with the CU4 package version.

    And guess what, the packages have changed!

    For example, when I compared the component ODS – Get snapshot of dm_exec_requests

    I get the following in the CU4 version

    While in the CU3 version you get the following:

    So there have definitely been “some” modifications. I’ve also verified the upload package, and it has the same issue.

     

  • I reinstalled CU4
  • Stopped the Query Statistics collection

  • I uploaded the CU3 SSIS packages to the server with dtutil, removed all the cache files from the cache folder and restarted the Query Statistics collection

Finally, my data collection started to work again…
I’ve opened a Connect item for this. Please vote, and let’s hope Microsoft will fix it in the next CU 😉

Happy collecting!

Pieter

Speaking at SQL Saturday Exeter #372

Two times in a row! Yes, I’ve been selected again to speak at a very fine SQL Saturday in Exeter!
I had a great time last year and I’m really looking forward to going to the UK again. I’m going by car, so if somebody needs a lift, just let me know.
The event will be held on the 24th and 25th of April 2015 at the Jurys Inn Hotel Exeter, like last year. A really nice hotel, believe me.

This year I’m going to talk about my DBA morning checklist. In my job as a SQL consultant, I have my weekly recurring customers. They don’t have a DBA, so I have to perform the daily DBA checklist on a weekly basis 😉 Initially, I started out executing some T-SQL scripts to verify the SQL Servers. But as more servers were installed, my DBA checklist became really time-consuming and I didn’t even have time for a coffee…

In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy-Based Management to evaluate your servers and how I configured my setup. By the end of this session, I guarantee you will have more time for your morning coffee!

I have the first timeslot, from 08:00 to 08:50, so coffee will be necessary 😉
There are of course a lot of other interesting sessions that day.
On Friday, there will be a number of full-day training sessions by William Durkin, David Peter Hansen, Steph Locke, Bob Duffy, Tobiasz Koprowski, John Martin, André Kamman and Richard Douglas. More information and registration can be found here.

And looks like the pirates are coming this year  :D!

Hope to see you all there in April!

Pieter

Speaking at PASS SQLRally Nordic 2015

PASS SQLRally Nordic is back, bigger than ever! And it looks like they are going to be right: if you look at the agenda, it will definitely be a top event!
There are a lot of international top speakers like Brent Ozar, Grant Fritchey, Itzik Ben-Gan and many more, as well as other top industry speakers from across the Nordic region and the world, for 3 days of SQL Server and BI training.
The event will be held on March 2-4, 2015, at the Tivoli Congress Center in Copenhagen, Denmark. I’ve never been to Copenhagen before, so I’m really looking forward to visiting the city and catching a glimpse of the Little Mermaid.

I’m really happy and proud that I was selected as one of the speakers!
I’m going to talk about Advanced Backup and Restore. Most DBAs know the standard options for implementing a basic backup and restore plan. But there are also lesser-known options that can help you with a more advanced recovery plan and that will increase the speed of your backups and restores. My session is bulk loaded with demos and it will give you a good idea of which infrequently used features can be helpful in your environment, like partial backup, backup encryption, piecemeal restore… You will also learn how you can improve the performance of your backups!

My timeslot is on the third day, Wednesday, March 4, from 12:45 pm to 1:45 pm. The full agenda of SQLRally Nordic can be found here.

Well, I hope to see you all in Copenhagen in March!

Pieter