DBA Morning Checklist with PBM 3.0

At PASS Summit I presented my session Run Your DBA Morning Checklist with Policy-Based Management. I started my session by asking the audience what they wanted to check every morning on their SQL environment.
I noted the list down and compared it to my existing morning checklist. The good thing about the audience list was that I got some new ideas to add to my DBA Morning Checklist. I’m pleased to share my new DBA Morning Checklist 3.0 with the following checks added.

Default Backup Compression

The policy is going to check if the server setting Compress Backup is switched on. Backup compression increases CPU usage, and the additional CPU consumed by the compression process might adversely impact concurrent operations. Before enabling this setting, make sure that the CPU increase has no impact on your production environment. In my experience, I have never had any issue using backup compression by default. In most cases, the full or differential backups are taken during low business hours, when the server can handle the extra CPU load.
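Although the actual check is a PBM condition, a quick manual equivalent in T-SQL looks like the sketch below (enabling the option is shown only as an example; evaluate the CPU impact first):

-- Check whether backup compression is enabled by default (1 = on, 0 = off)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'backup compression default';

-- Example only: switch it on if your CPU headroom allows it
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;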

Max degree of parallelism

The policy is going to check the server setting Max Degree of Parallelism. The condition is built according to the Microsoft recommendations below; a quick manual check in T-SQL follows the list.

  • For servers that use more than eight processors, use the following configuration: MAXDOP=8.
  • For servers that have eight or fewer processors, use the following configuration where N equals the number of processors: MAXDOP=0 to N.
  • For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs that are assigned to each NUMA node.
  • For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.
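A quick manual equivalent of this check (a sketch, not the PBM condition itself) compares the configured value with the processor information of the instance:

-- Compare the configured MAXDOP with the logical CPU information
SELECT
    (SELECT CAST(value_in_use AS int)
     FROM sys.configurations
     WHERE name = N'max degree of parallelism') AS configured_maxdop,
    cpu_count,
    hyperthread_ratio
FROM sys.dm_os_sys_info;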

Maximum Server Memory

I’m just going to check if the Maximum Server Memory setting is different from the default setting of 2147483647 MB. It’s difficult to check other values, because the policy cannot know whether there are other instances or other software on the server that need memory.
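A quick manual equivalent of the check (a sketch): a value of 2147483647 means the setting was never changed from the default.

SELECT name, value_in_use AS max_server_memory_MB
FROM sys.configurations
WHERE name = N'max server memory (MB)';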

Optimize for Ad Hoc Workloads

The policy is going to check if the server setting Optimize for Ad Hoc Workloads is set to True. By default, this setting is set to False.
The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single use ad hoc batches. More information can be found on https://msdn.microsoft.com/en-us/library/cc645587.aspx

I don’t see any downside to enabling this setting.
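For reference, a manual equivalent of the check, followed by an example of enabling the setting (it is an advanced option, so show advanced options has to be enabled first):

-- Check the current value (1 = enabled)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'optimize for ad hoc workloads';

-- Example: enable the setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;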

Outdated Statistics

I’m going to check if every statistic has been updated within the last 7 days.
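A rough manual equivalent of this check for the current database (a sketch; the policy evaluates it per database):

-- Statistics on user tables that have not been updated in the last 7 days (or never)
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE o.type = 'U'
  AND (STATS_DATE(s.object_id, s.stats_id) IS NULL
       OR STATS_DATE(s.object_id, s.stats_id) < DATEADD(DAY, -7, GETDATE()));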

Remote DAC Connection

This policy is going to check if the server setting Remote DAC Connection is enabled.
The DAC lets an administrator access a running server to execute diagnostic functions or Transact-SQL statements, or to troubleshoot problems on the server, even when the server is locked or running in an abnormal state and not responding to a SQL Server Database Engine connection. More information can be found on https://msdn.microsoft.com/en-us/library/ms190468.aspx. I recommend always enabling this setting after installation.
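A manual equivalent of the check, plus an example of enabling the remote DAC (sketch only):

-- Check the current value (1 = remote DAC enabled)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'remote admin connections';

-- Example: enable the remote DAC
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;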

Tables with Fragmented indexes

This policy is verified against every table in every database. The policy will fail if an index has a fragmentation percentage higher than 10% and a page count greater than 1000.
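A rough manual equivalent of the check for a single database (a sketch; the LIMITED scan mode keeps it relatively cheap):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;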

Trace Flags Enabled

By default, no trace flags are enabled. However, some trace flags might be useful to enable on production servers. I have blogged about the trace flags I enable by default.
In this policy, I’m going to check if all these trace flags are enabled.
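A quick manual equivalent: list the globally enabled trace flags and compare them with the list you expect.

-- Show the status of all enabled trace flags
DBCC TRACESTATUS(-1);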

Perform volume maintenance task

Perform volume maintenance tasks is a local group policy right that can be granted to the SQL Server service account to enable Instant File Initialization. More information can be found on https://msdn.microsoft.com/en-us/library/ms175935.aspx

However, I was not able to define any policy to check this GPO… I thought it was possible to check this with WMI but didn’t find a solution for it yet. So, if somebody can help me out with this, please let me know 😉

But again, you can download my full DBA Morning Checklist 3.0 from http://aka.ms/dbamorningchecklist. Feel free to adjust it 🙂

Enjoy!

Pieter

How to perform index maintenance on Azure SQL Database – Reviewed

A few months ago, I wrote a blog post on how to perform index maintenance on Azure SQL Database.
I thought my solution with Ola Hallengren’s maintenance scripts worked fine; however, I got some comments on the post and on Twitter that there was an issue with my solution.

After analyzing and retesting it again, it turns out that there was indeed an issue. During the setup, I tried to create the IndexOptimize stored procedure in the master database, but that is not allowed.
So, I created a management database and created the stored procedures in that DB. I’ve now noticed that when you do a select * from sys.databases in a user database, it only returns the database itself and master…
It’s also documented here https://msdn.microsoft.com/en-us/library/ms178534.aspx#Anchor_1
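For illustration, this is what the runbook (and Ola’s script) effectively sees when the query below is run inside a user database on Azure SQL Database: only master and the current database are returned.

-- Run inside a user database on Azure SQL Database
SELECT name, state_desc
FROM sys.databases;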

This was an issue because the IndexOptimize stored procedure also uses sys.databases to determine which databases to process. So, the runbook was working fine, except that it was only doing an index optimize in my management DB and nothing more…

I’ve contacted Ola about this and he told me that sys.databases is not the only issue. The problem is also that Azure SQL Database does not support three-part naming (DatabaseName.SchemaName.ObjectName). That means that it is not possible to have a stored procedure in one database and do something in another database.

Workaround

After investigating, I’ve found a possible workaround for this issue.
The first step is to create the IndexOptimize stored procedure in every user database.
Secondly, I’ve modified my runbook so that the selection of the user databases is not done by the stored procedure but inside the runbook.

I’m first going to connect to the master database to retrieve a list of all the online user databases.
Once I’ve got it, I connect to each of the user databases separately in a loop, and execute the index maintenance procedure.

This solution seems to be working so far. I know it is not (yet) as flexible as Ola’s scripts but it is certainly a good step in the right direction.
You can find the new script on the original post.

Good luck with it and if you have any issues with it, please let me know.

Pieter

How to view Azure SQL Database Audit Logs?

When you enable auditing on your Azure SQL Database, it will capture database events and write them to an audit log on your Azure Storage account.
In fact, the audit log is actually an Azure Table, which keeps the data according to the retention period that you have specified. More information about auditing on Azure SQL Database can be found here.
The only “issue” with auditing is that you cannot read the Azure Storage Table directly from the portal. You’ll have to use different tools to analyze your audit logs.

The Azure Portal Dashboard

The Azure Portal Dashboard gives you an overview and summary of the events that have happened on your database.
You can see the number of schema changes, login failures, data changes, stored procedures, …
This dashboard is only available in the new portal.

 

Azure Storage Explorer

The Azure Storage Explorer is a free tool that you can download from CodePlex and that allows you to read the Azure audit log in detail. With the Storage Explorer, you can see and configure all your storage account components.
Once you have installed the tool, open it and click on Add Account. Specify the storage account and the access key where the audit table is stored.


When you’re connected, you’ll see a Tables folder, where you can find your audit log. By default, the name of the audit log always has the same structure: it begins with “SQLDBAuditLogs” + the name of your server + the name of your database + the date the audit was activated.

When you click on the table, the audit log will open and you can see all the tracked events that have been logged.
Be aware that the event time might be different from the time zone where you are located. You also have to take into account that it can take some time to open the audit log if many events are tracked.
You also have the possibility to download the logs locally. Next to the event time, you also see the server and database name, the application that executed the query, the type of the event and even the SQL statement or stored procedure that was executed.
You can really see who did what and when on your database, which can be useful for troubleshooting.

In the same audit log, you can also see the number of AffectedRows and ResponseRows and how much time it took for the server to execute the query.

Excel and Power Query

A template can be downloaded from the Azure Portal to help you analyze the audit logs. The Excel file allows you to connect to your Storage Account and use Power Query to import the log information.

I won’t explain how you can use the Excel file, because Microsoft already provides a detailed explanation in the Excel file itself.

Power BI

Microsoft also released the Power BI Azure Audit Logs content pack which can help you easily analyze and visualize the wealth of information contained in these logs.
You just need to connect to your storage account through the content pack and a nice dashboard will be created for you automatically.

More information can be found on this MSDN blog post.

If you have other ways to connect to the audit logs, please share them with me. I’m always interested.

Pieter

4 reasons to subscribe for Speaker Idol at PASS Summit 2015

Last year, Denny Cherry (blog | twitter) organized Speaker Idol at PASS Summit for the first time. And guess what, I won the contest and I’m really happy that I can present for the first time at the event.

This year, it might be your lucky day because Denny is organizing Speaker Idol again!

Let me give you 4 good reasons why you should subscribe for the contest and take your shot!

Learn

I’ve learned a lot during the contest. You have to present in front of a jury of “senior” and talented speakers like Tim Ford, Jason Strate, Karen Lopez, Grant Fritchey, Joey D’Antoni, …
You only get 5 minutes to present, so it’s really short. Don’t give a 2-minute introduction, for example, because those 5 minutes are over before you know it. Don’t go over the 5 minutes, because you really have to stop. So, even if this is a really short presentation, take your time to prepare good slides and rehearse!
The jury will give you good tips and remarks on what was good and what you could do better. Listen to them and take their advice! If you go to the next round, make sure that you use that advice.
Also listen to the other candidates and to the tips the jury gives them. You might get some new ideas and make your presentation even better!

You can win!

Last year I submitted 5 sessions for the event, but none of them were selected…
I was going to the event anyway, so I thought, why not participate in Speaker Idol? I had nothing to lose.
To my surprise, I won the contest. And yes, you can also win! Believe in yourself and give the best you can during those 5 minutes.

This year, the prize is again a speaker slot at next year’s event! Imagine being the only person who is already 100% sure of speaking at PASS Summit 2016! It’s a really good feeling, believe me 🙂. On the other hand, it can make you nervous, because you have a whole year to think of a good session.

Meet new people

At PASS Summit, you meet new people every day, but at Speaker Idol, you meet your fellow contestants.
They are just as nervous as you are 🙂 and it’s nice to see who those people are and what their experience is. It’s always nice to meet some of them again at other SQL events.
During the finals, there were a lot of people in the room, which was really cool. People were interested in the contest and I expect an even bigger audience in the room this year.

It’s fun!

Believe me, it’s really fun to do. Last year was my first Summit experience and it was all kind of new to me.
You feel the tension in the room, but on the other hand, you laugh a lot 🙂

Well, I hope to see you all in my session and don’t forget “take your shot, you have nothing to lose!”

How to perform index maintenance on Azure SQL Database

It’s been a long time since I wrote a blog post and I finally found some time 😀 I’ve played a lot recently with the new Azure SQL Database V12 version. It’s really cool because it has a lot of improvements if you compare it to V11. With the V12 version, you have almost the same functionality as an on-premises database, and it also includes new functionality like row-level security, dynamic data masking and the Query Store.

Just like an on-premises database, you also have to maintain your indexes and statistics, because indexes can also get fragmented on an Azure SQL Database. With an on-premises server or a VM in Azure, most DBAs schedule an index maintenance job under the SQL Server Agent. However, with Azure SQL Database, you have to think inside the database, which means you don’t have any server functionality, including the SQL Server Agent. Luckily, the Azure platform provides Azure Automation, which can be used as a replacement for the SQL Server Agent. In this blog post, I’m going to explain how you can schedule your index maintenance job, or any other job that needs to be scheduled.

Create a new Azure Automation Account

First of all, you have to begin by creating a new Azure Automation account. Go to the portal and select the Automation tab. Click the Create button to create the new Automation account. Fill in the name of the account and the region. Choose the region that is closest to you.

Create an Azure Automation credential asset

As a prerequisite, create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server. Click on the Automation account that you have just created and select Assets. In this section, click the “Add Setting” button at the bottom.

Select the option “Add Credential”


Select the Credential type “Windows PowerShell Credential” because we’re going to use this credential in the PowerShell Workflow. Give the Credential a name.

Specify the username and the password that you will link to the credential. This will be the user that will connect to your SQL Azure Database.

Click on OK and wait until the credential is created.

Install your maintenance scripts

Make sure that you have installed your maintenance scripts/procedures on all your Azure databases. In my example, I’ve been using the maintenance scripts of Ola Hallengren. For the index maintenance, I have to install the scripts IndexOptimize.sql and CommandExecute.sql. Make sure you download the latest version, because Ola fixed an issue with the index optimize on Azure SQL Database on July 19th, 2015. There is a small issue with the scripts: Ola uses cross-database stored procedures, which are not supported in Azure SQL Database at the moment. So, the @Databases parameter will not work correctly. Please also check the comments of this blog post. You have to implement a workaround in the runbook.

Import the Maintenance Runbook

We have now set up all the prerequisites, so we can start with creating a runbook in our Azure Automation account. A runbook is a PowerShell workflow that needs to be created or imported. You can actually compare it to configuring a job step in a SQL Server Agent job. The runbook contains the SQL scripts that need to be executed for the index maintenance and will be scheduled later on. Select your Automation account and go to the Runbooks tab. Click on the “Import” button at the bottom to upload your PowerShell workflow. Select the PowerShell script that you have created and upload it.

Here is my script that I have used.

<#
.SYNOPSIS
 Perform index maintenance

.DESCRIPTION
 This runbook provides an example of how Azure Automation can be used to accomplish common SQL Agent tasks in the cloud. 
 As prerequisite, please create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server ($SqlServerName).
 Make sure that you have installed the scripts IndexOptimize.sql and CommandExecute.sql of Ola Hallengren (https://ola.hallengren.com/downloads.html)
 The check for the MAXDOP value in the IndexOptimize.sql script is using sys.dm_os_sys_info, which is currently not supported
 So be sure you disable that check otherwise it will return an error.
 
.EXAMPLE
 SQLServerIndexMaintenance
 
.NOTES
 AUTHOR: Pieter Vanhove
 LAST EDIT: October 20, 2015 
#>

workflow SQLServerIndexMaintenance
{
 param
 (
 # Fully-qualified name of the Azure DB server 
 [parameter(Mandatory=$true)] 
 [string] $SqlServerName,
 
 # Credentials for $SqlServerName stored as an Azure Automation credential asset
 # When using in the Azure Automation UI, please enter the name of the credential asset for the "Credential" parameter
 [parameter(Mandatory=$true)] 
 [PSCredential] $Credential
 )
 
 inlinescript{
 
 # Setup credentials 
 $ServerName = $Using:SqlServerName
 $UserId = $Using:Credential.UserName
 $Password = ($Using:Credential).GetNetworkCredential().Password
 
 
 # Create connection to Master DB
 $MasterDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
 $MasterDatabaseConnection.ConnectionString = "Server = $ServerName; Database = Master; User ID = $UserId; Password = $Password;"
 $MasterDatabaseConnection.Open();
 
 # Create command to retrieve the names of the online databases on $ServerName
 $MasterDatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
 $MasterDatabaseCommand.Connection = $MasterDatabaseConnection
 $MasterDatabaseCommand.CommandText = 
 "
 select name from sys.databases
 where state_desc='ONLINE'
 
 "
 # Execute reader and return the names of the online databases
 $MasterDbResult = $MasterDatabaseCommand.ExecuteReader()
 
 # Proceed if there is at least one database
 if ($MasterDbResult.HasRows)
 {
 # Create connection for each individual database
 $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
 $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
 
 # Iterate through each database under $ServerName
 while($MasterDbResult.Read())
 {
 $DbName = $MasterDbResult[0]
 
 
 # Apply conditions for user databases (i.e., not master DB)
 if($DbName -ne "Master")
 {
 # Setup connection string for $DbName
 $DatabaseConnection.ConnectionString = "Server=$ServerName; Database=$DbName; User ID=$UserId; Password=$Password;"
 $DatabaseConnection.Open();
 
 # Create command for a specific database $DBName
 $DatabaseCommand.Connection = $DatabaseConnection
 
 Write-Output "Perform index maintenance on $DbName"

 # Build and execute the IndexOptimize command for the current database $DbName

 $DatabaseCommand.CommandText ="
 EXECUTE dbo.IndexOptimize
 @Databases = '" + $DbName + "',
 @FragmentationLow = NULL,
 @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30,
 @UpdateStatistics = 'ALL',
 @OnlyModifiedStatistics = 'Y'
 "
 Write-Output $DatabaseCommand.CommandText
 $NonQueryResult = $DatabaseCommand.ExecuteNonQuery()
 
 # Close connection to $DbName
 $DatabaseConnection.Close()
 
 }
 } 
 
 # Close connection to Master DB
 $MasterDatabaseConnection.Close() 
 } 
 }
}

It’s based on a script that I found on CodePlex. As you will notice, I have specified 2 parameters:

  • $SqlServerName: the server name where we want to perform the index maintenance.
  • $Credential: the username that will be used to connect to the SQL Server. This is linked to the credential that we created in step 2.

The workflow first connects to the master database to retrieve all the online databases. As mentioned in the part about installing the maintenance scripts, I had to find a workaround because you cannot use cross-database stored procedures on Azure SQL Database, which is what Ola’s IndexOptimize script relies on. Once I get the list of all the databases, I connect to each of the databases separately and execute the index optimize. Please note that the @Databases parameter should contain the current database name. This solution is not (yet) as flexible as Ola’s solution, but it’s a good start.

Once the import is done, you will notice that the Authoring column still has a “New” status. The runbook still needs to be published (see the next chapter).

Test your Workflow and Publish

Before you can actually start using this workflow, you have to publish it. However, I recommend first testing whether everything works fine. Once the runbook has been created, you can click on it and go to the Author section.


In the “Draft” section you can see the workflow that you have just imported. Click on the “Test” button at the bottom to test your runbook. Before the runbook is actually executed, you have to provide the 2 parameters. In my example, the credential will be the AzureDBCredential that I created in step 2. The name of my SQL Server is pvhv12.database.secure.windows.net. This is the “SQL Server” that my databases are running on.


If all goes well, you should see an empty output pane with the status COMPLETED.
If not everything goes well, you will see the errors in the output pane. As soon as you have tested the job and everything works fine, you can publish the runbook and start using it.

Schedule the runbook

The last step is to schedule the published runbook. This is actually the same as in the SQL Server Agent: you have to link a schedule to the job, or in this case the runbook. In the runbook, click on the “Schedule” section and select “Link to a New Schedule”.
Type in the name of your schedule; again, the same principle as the name of a job schedule in the SQL Server Agent.
Secondly, you need to configure the schedule. Currently you can choose between One time, Hourly or Daily. In my example, I want my index maintenance to run on a daily basis starting at 23:00.

In the final step, you have to specify the runbook parameters that will be used when the job is executed. These are the same values that I used in the test phase. Once the schedule is created, your index maintenance will be done on a daily basis.

Verify job history

Just like in the SQL Server Agent, you can also verify the job history of a Runbook. You can select “Jobs” in your runbook to verify the job history. When you click on one of the jobs, you can even see the error output, job summary and the parameters that have been used.

Summary

To summarize, here is a comparison between Azure Automation and a SQL Server Agent job.

Azure Automation                        SQL Server Agent Job
Create an Azure Automation Account      Create a SQL Server Agent job
Create a runbook                        Create a job step
Test the runbook                        Start Job at Step
Publish the runbook                     Save the job
Schedule the runbook                    Schedule the SQL Server Agent job
View jobs of the runbook                View History

 

Piecemeal restore of a partitioned table

A few weeks ago I presented my Advanced Backup and Restore session at SQLRally in Denmark.
One of the topics that I showed was the piecemeal restore functionality. This allows you to restore your database in “pieces”. The files or filegroups that have been restored are already accessible while the others are still restoring. Jes Borland (blog | twitter) has written a good blog post on this topic.
During the session, I got the question whether it was possible to restore filegroups that are used by a partitioned table, which would mean that the table becomes partially available.
My answer to that question was that you had to restore all the files before the table becomes available, but I figured out that my answer was not correct.

With this blog post I want to correct my answer.

Setup the test scenario

For testing purposes, I created a database, MyDatabase, with 5 filegroups: PRIMARY, Partition1, Partition2, Partition3 and Partition4.

Once the database was created, I added my partition function and partition scheme.
As you can see, the partition function has 3 boundaries (1, 100, 1000) and the partition scheme specifies the filegroups that hold each of the four partitions.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (Partition1, Partition2, Partition3, Partition4);

The next step was creating a partitioned table based on the partition scheme.

--Create Partitioned table
CREATE TABLE PartitionedTable
	(PartitionColumnID int NOT NULL,
	Name varchar(100) NOT NULL,
	CONSTRAINT [PK_PartitionColumnID] PRIMARY KEY CLUSTERED
	(
		PartitionColumnID ASC
	)
	) ON myRangePS1(PartitionColumnID);

And fill the table with 1000 records

declare @i as int
set @i=1
while @i<=1000
	begin
		insert into PartitionedTable
		select @i,'Name' + CAST(@i as varchar)
		set @i=@i+1
	end

The query below gives me the number of records per partition.

SELECT $PARTITION.myRangePF1(PartitionColumnID) AS Partition, 
COUNT(*) AS [COUNT] FROM PartitionedTable
GROUP BY $PARTITION.myRangePF1(PartitionColumnID)
ORDER BY Partition ;
GO

Restore the partitioned table

Before I can start with a piecemeal restore, I of course need a backup of my database. Don’t forget to change the <BackupPath> in the script.

--Take a new FULL BACKUP
BACKUP DATABASE [MyDatabase]
TO DISK = N'<BackupPath>MyDatabase.bak' WITH COMPRESSION,INIT
GO
BACKUP LOG [MyDatabase] TO DISK = N'<BackupPath>MyDatabase_Log.trn' WITH FORMAT,INIT,COMPRESSION
GO

Ok, let’s drop the database and begin with the piecemeal restore

USE [master]
GO
ALTER DATABASE [MyDatabase] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE [MyDatabase]
GO

First of all, I’m going to restore the PRIMARY filegroup so that my database is already available and “ONLINE” again.

RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'PRIMARY'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH PARTIAL, NORECOVERY 
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of my filegroups, you can see that my PRIMARY file is already ONLINE

SELECT [name], [state_desc] 
FROM [MyDatabase].sys.database_files;
GO

Secondly, I restore the Partition1 filegroup. Don’t forget to always apply the transaction log backup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition1'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I checked the status of my filegroups again, the Partition1 filegroup was ONLINE.

I know that this filegroup holds 1 record (see “the number of records per partition”), so let’s see if I can query the table.

use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable
GO

Well, I get an error that the second partition cannot be accessed, but if you look at the Results, the query returned 1 record!

I restored the Partition2 filegroup

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition2'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

And executed the same query. I again got an error that the Partition3 filegroup was not accessible, but the query returned 100 rows. This is correct, because I know that there were 100 records in the first 2 partitions of the table.

Of course, if you use the correct WHERE clause and only query the ONLINE filegroups, you will not get any errors at all.

--Try to select the tables
use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable where PartitionColumnID<=100
GO

Finally, I also restored Partition3 and 4 and my table was completely available again.

Conclusion

My answer during my session was not correct. It is possible to restore a partitioned table in pieces and make part of the table available while the other filegroups are still in a recovering state.
This can be really helpful to improve your RTO. Instead of restoring the complete table in one go, you can restore only the partitions that you really need to get your application back up and running.
The less important partitions, for example those that only contain historical data, can be restored at a later stage.

Well, as always, you’re never too old to learn something!

Time for a new challenge!

I’m excited to announce that on the 1st of April (not a joke 🙂), I will be joining Microsoft as a Premier Field Engineer SQL Server. After 4.5 amazing years working for Kohera, it was time for a new challenge.
When I started my job as a consultant, I knew only a little bit about SQL Server. I started by working at a bank in Brussels for 1.5 years, where I learned a lot about the daily DBA tasks.
After this “internal” job I really became a consultant, because I started working on different, shorter projects like consolidations and migrations, SQL audits, performance tuning, teaching customers and many more…
I would like to give a special thanks to Frederik Vandeputte and Koen Jacobs (both managing partners of Kohera) for the chances and opportunities that they gave me. It helped me a lot in improving my SQL skills. Thanks guys for the support and for believing in me!
Another thank-you goes to my colleagues. I didn’t see them a lot, but when we met, it was always fun to hear their SQL stories.

I’m really looking forward to starting work for Microsoft! I’ve waited a long time for this, believe me 🙂.
A PFE delivers proactive (workshops, health checks, risk assessments) and reactive (on-site support, troubleshooting) engagements and is a highly skilled engineer with deep technical expertise in a given technology.

More information on Premier Field Role details can be found here or watch the Microsoft Services – Premier Field Engineer Role video.

I still hope to do presentations and events for the SQL Community which I really like doing!

Well… I’ll see you around Europe!

Pieter

 

SQL Server 2012 SP2 CU4 breaks Data Collection

Last week, one of my clients upgraded their SQL Servers to SQL Server 2012 SP2 CU4.
After the upgrade, the internal DBA notified me that the data collection was not showing any Query Statistics anymore in the Management Data Warehouse report.
I verified the logs of the Query Statistics and saw that the Query Statistics Collection ran for a few minutes and then failed with the following errors:

The external columns for ODS – Get snapshot of dm_exec_requests are out of synchronization with the data source columns. The external column “command” needs to be updated.

Truncation may occur due to retrieving data from database column “command” with a length of 32 to data flow column “command” with a length of 16.

The collection never ran correctly, so the upload to the MDW never happened. That’s the reason why the report was empty.
But why did the collection package start to fail? Was the DMV dm_exec_requests updated so that the command column size is now 32 characters instead of 16? I double-checked with BOL and with an older version of SQL Server, but the command field has always had a length of 32.

Gianluca Sartori (blog | twitter) suggested on Twitter #sqlhelp to modify the SSIS packages to get this fixed. I downloaded the package from the server, modified the command field length to 32 characters and uploaded it again to the server. The QueryActivityCollect package was now running OK, but the QueryActivityUpload package then started to fail with the same error on 3 different components (can’t remember which ones). I modified them too and finally I got everything working. … Well… not… Still no data on my report. I noticed that the upload SSIS package was not loading the queries into the [snapshot].[query_stats] table anymore…

I was getting frustrated and stopped modifying the packages. There must be a better way to fix this! So I came up with the following workaround.

  • I uninstalled CU4. Yes, you can uninstall SPs and CUs. Just go to Programs and Features in the control panel, select the option “View installed updates”, right click on the CU and select Uninstall.

     

  • Once the CU was uninstalled, my data collection started to work again. I downloaded the packages from the server again and compared them with the CU4 package versions.

    And guess what, the packages had changed!

    For example, when I compared the component ODS – Get snapshot of dm_exec_requests

    I get the following in the CU4 version

    While in the CU3 version you get the following:

    So there have definitely been “some” modifications. I’ve also verified the upload package and it has the same issue.

     

  • I reinstalled CU4
  • Stopped the Query Statistics collection

  • I uploaded the CU3 SSIS package to the server with dtutil, removed all the cache files from the cache folder and restarted the Query Statistics collection

Finally, my data collection started to work again…
I’ve opened a Connect item for this. Please vote, and let’s hope Microsoft will fix this in the next CU 🙂

Happy collecting!

Pieter

Speaking at SQL Saturday Exeter #372

2 times in a row! Yes, I’ve been selected again to speak at a very fine SQL Saturday in Exeter!
I had a great time last year and I’m really looking forward to going to the UK again. I’m going by car, so if somebody needs a lift, just let me know.
The event will be held on 24th and 25th April 2015 at the Jurys Inn Hotel Exeter, like last year. Really nice hotel, believe me.

This year I’m going to talk about my DBA morning checklist. In my job as a SQL consultant, I have my weekly recurring customers. They don’t have a DBA, so I have to perform the daily DBA checklist on a weekly basis 😉 Initially, I started by executing some T-SQL scripts to verify the SQL Servers. But as more servers were installed, my DBA checklist became really time-consuming and I didn’t even have time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy-Based Management to evaluate your servers and how I configured my setup. By the end of this session, I guarantee you will have more time for your morning coffee!

I have the first timeslot, from 08:00 AM till 08:50, so coffee will be necessary 😉
There are of course a lot of other interesting sessions that day.
On Friday, there will be a lot of full day trainings by William Durkin, David Peter Hansen, Steph Locke, Bob Duffy,  Tobiasz Koprowski, John Martin, André Kamman and Richard Douglas. More information and registration can be found here.

And looks like the pirates are coming this year  :D!

Hope to see you all there in April!

Pieter

Speaking at PASS SQLRally Nordic 2015

PASS SQLRally Nordic is back – bigger than ever! And it looks like they are going to be right. If you look at the agenda, it will definitely be a top event!
There are a lot of international top speakers like Brent Ozar, Grant Fritchey, Itzik Ben-Gan and many more, but also other top industry speakers from across the Nordic region and the world, for 3 days of SQL Server and BI training.
The event will be held on March 2-4, 2015, at the Tivoli Congress Center in Copenhagen, Denmark. I’ve never been to Copenhagen before, so I’m really looking forward to visiting the city and catching a glimpse of the Little Mermaid.

I’m really happy and proud that I was selected as one of the speakers!
I’m going to talk about Advanced Backup and Restore. Most DBAs know the standard options for implementing a basic backup and restore plan, but there are also lesser-known options that can help you with a more advanced recovery plan and that will increase the speed of your backups and restores. My session is bulk loaded with demos and it will give you a good idea of which infrequently used features can be helpful in your environment, like partial backup, backup encryption, piecemeal restore … You will also learn how you can improve the performance of your backups! My timeslot is on the third day, Wednesday, March 4, from 12:45 pm – 01:45 pm. The full agenda of SQLRally Nordic can be found here.

Well, I hope to see you all in Copenhagen in March!

Pieter

Restore System Databases

Did you ever have to restore your system databases? It’s not that difficult when the databases are still available (or at least some version of them). However, what do you need to do when all system databases are gone?
For example, when your system database LUN is missing 🙂. Books Online doesn’t really explain this in detail, so I’ve written this blog post to make it clearer.

Before you read on, please make sure that you have a full backup of all your system databases.

Restore MASTER database

SQL Server requires the master database before a SQL Server instance can run at all. Some version of a master database must exist. Without that “version” the service will not start and it will be impossible to restore the backup.
So, the first step is to rebuild the system databases.

To rebuild the system databases, you can follow these steps on http://msdn.microsoft.com/en-us/library/dd207003.aspx

Insert the SQL Server installation media into the disk drive or mount the iso-file.

From a command prompt window (run as administrator), enter the following command.

Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=<InstanceName> /SQLSYSADMINACCOUNTS=<accounts>

<InstanceName> Is the name of the instance of SQL Server. For the default instance, enter MSSQLSERVER.
<accounts> Specifies the Windows groups or individual accounts to add to the sysadmin fixed server role.

When the rebuild is finished, you will find new (empty) data and log files in the data folder of the instance.

Once you have the “dummy” master.mdf and .ldf files in place, you need to start up the instance in single user mode. This is done by adding the -m startup parameter. Open the SQL Server Configuration Manager to add it (see screenshot below).

In the SQL Server Configuration Manager, start the SQL Server service. Once it is started, we can finally restore the master database. The example starts sqlcmd and executes a RESTORE DATABASE statement that restores a full database backup of master from a disk device.
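The original example is a screenshot; a minimal sketch of the statement, assuming the full backup is stored as master.bak under <BackupPath>, looks like this when executed from the sqlcmd session:

-- Run from sqlcmd while the instance is started with -m
RESTORE DATABASE master
FROM DISK = N'<BackupPath>master.bak'
WITH REPLACE;
GO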

The REPLACE option will replace the current master database with the one in the backup file. Also notice that after the restore, the instance shuts down. Remove the -m option from the startup parameters again before you restart SQL Server. You will see that all your user databases are back online and that all your server logins exist.

Restore MASTER database on a failover cluster

The procedure is slightly different from the restore of a stand-alone instance. The rebuild of the system databases must be performed on the active node, and the SQL Server resource in the corresponding cluster application group must be taken offline before performing the procedure. I’m trying to restore the master database on a Windows 2012 R2 failover cluster and I’m running SQL Server 2012 SP2 Enterprise Edition.

I started with a rebuild of the system databases and started up the SQL Server instance with the -m parameter (single user mode), as described above.

To restore the master database, I opened a sqlcmd session, but I always got the error that only one administrator can connect to the instance. The problem is that as soon as I start up the service, the sp_server_diagnostics process kicks in and takes the first connection…
After searching for a while, I figured out that you have to start up a clustered instance in single user mode in a different way. More information can be found on http://msdn.microsoft.com/en-us/library/ms188236.aspx. Check the note for clustered installations.

Restore MASTER database alternative

Another alternative method to restore the MASTER database could be the following:

  • Startup the instance with the new empty master database after the rebuild of the system databases
  • Restore the original master database as a user database (use a different name)
  • Shut down the service again
  • Remove the master.mdf and ldf files.
  • Rename the restored master database back to master.mdf and mastlog.ldf
  • Restart the service

The method above also works, but the procedures mentioned earlier are the best practice for SQL Server.

Restore MODEL database

The model database can be restored like a normal database. However, if the database is corrupt, you have to start up the SQL Server instance with trace flag -T3608. It prevents SQL Server from automatically starting and recovering any database except the master database.

To activate the trace flag, open the SQL Server Configuration manager and add it to the startup parameters.

(Re)start your SQL Server Instance. Do not forget to remove this trace flag after the model database has been restored.

Open SQL Server Management Studio and restore the original model database from the backup.

Restore MSDB database

Before you can restore the MSDB database, you’ll have to stop the SQL Server Agent. You can do this with the SQL Server Configuration Manager, or just right click on the agent in SSMS and click Stop.

The MSDB database can be restored like a normal user database.

Once the restore is finished, you can restart the SQL Server Agent and you should see all your jobs again in the SSMS.

Restore Tempdb database

You cannot back up or restore tempdb. This database is re-created every time an instance of SQL Server is started. When the server instance is shut down, any data in tempdb is deleted permanently.

If your tempdb is corrupt, just restart your SQL Server instance. In case your disk is corrupt, you will have to move your tempdb. More information on how to move system databases can be found here: http://msdn.microsoft.com/en-us/library/ms345408.aspx
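As a sketch of such a move (assuming the default logical file names tempdev and templog; adjust the target path), note that the new files are only created at the next restart of the instance:

-- Point the tempdb files at a new location; takes effect after a restart
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'<NewPath>\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'<NewPath>\templog.ldf');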

Restore RESOURCE database

The Resource database is a hidden database. It resides in the mssqlsystemresource.mdf file, which contains only code. Therefore, SQL Server cannot back up the Resource database. It can be restored by a file-level restore in Windows or by running the setup program for SQL Server.

 

HAPPY RESTORE!

Maximum Failover Limit

With this blog post, I will try to save a few hours of your life!

I’ve recently set up a simple 2-node failover cluster that will be used for an AlwaysOn Availability Group. After the installation, I suggested that the customer perform failover testing, not only to see the behavior of the Windows failover cluster, but also to see how the application(s) would respond to a failover. One of the tests was disabling the production LAN.
When I disabled the NIC on the node that was hosting the cluster group, it did a nice failover as expected.

I activated the NIC again and did the same test on the other node (which was now hosting the cluster group). To my surprise, there was no failover at all; the cluster name and the IP address just went offline.

This is not what I expected.
I checked the cluster events and saw the following error “Cluster role ‘Cluster Group’ has exceeded its failover threshold”

That’s right…, I forgot all about that. The maximum failover limit is set to 1 time in 6 hours.
I knew that you have to change this for testing. This is really well explained in this blog post: http://blogs.msdn.com/b/arvindsh/archive/2012/03/09/failover-cluster-group-maximum-failures-limit.aspx.
That post uses Windows Server 2008, but my installation is using Windows Server 2012 R2. It really took me a long time to figure out how I could change the maximum failover limit of the cluster group in Windows Server 2012…

You have to right click on the cluster name and select Properties. Now, if you look closely, there is a “link” in the first window called “Manage Core Cluster Resource Group”.
I didn’t notice that link, because most of the links just open the help files…

Go to the Failover tab, and there you will find the maximum failures in the specified period. By default it is set to 1 in a period of 6 hours. This is not enough during failover cluster testing and will give you the failed/offline issue that I had.
Change it to a higher value, 100 for example.

Don’t forget to set it back to the default value after you finish your tests

I did a few more tests and it was working fine now!

Well, I hope that I have saved a few hours of your life with this 😉

Limit RESTORE to a single database

A client came to me with the following question: “Is it possible to limit a login to restore only one specific database?”
Apparently, the application that he was trying to upgrade took a backup of the database before the upgrade. In case of failure, the upgrade procedure would automatically restore the backup.
Good idea to restrict that user, but I didn’t know by heart if that was possible…

Let’s figure it out!

I’ve created a test login “RestoreUser” that isn’t a member of any server role, except public. The login is linked to a database user that is a member of the db_owner role of my Sales database.

Looks like I’m not able to perform a restore of the database with the current user rights.

dbcreator

If we check the server roles, there might be an option to use the dbcreator role. However, if we look at the description…

Members of the dbcreator fixed server role can create, alter, drop, and restore any database.

There might be an issue, as we don’t want the user to create or restore other databases.
My test results confirmed this, so the dbcreator role… no option…

Custom Server role

The database was running on SQL Server 2012, so maybe I could create a new server role to put a restriction on the restore and add my login to that role.
The only permission that looked interesting was CREATE and ALTER DATABASE. No RESTORE DATABASE permission could be selected, and the other options were not relevant.


When I tried the restore, it didn’t work either. And even if it had worked, the description mentions “any” database, so again the same issue as with the dbcreator role…

DDL Trigger

Maybe I can capture the RESTORE command in a DDL trigger? In the trigger, I could check the user and the database that is being restored.
I’ve checked the possible DDL events, and the RESTORE command is not a DDL event…

DDL trigger…? No option…

Database Owner

If you check the RESTORE command on Books Online and look for the security part, you will see the following:

If the database exists, RESTORE permissions default to members of the sysadmin and dbcreator fixed server roles and the owner (dbo) of the database

We already know that sysadmin and dbcreator are not an option, but let’s try the “owner of the database” option.

To change the owner of the database, you can execute this command. Of course, change the name of the database and the user to your database name and user.

ALTER AUTHORIZATION ON DATABASE::SalesDB TO RestoreUser;

Well… the restore of my database succeeded!

And when I tried to restore another database, it didn’t work.

Mission accomplished!

Is it wrong to change the Database Owner?

The database owner gets full privileges on the database itself. So be careful when you change the database owner that the former database owner can still access the database if needed.
In this post by Brent Ozar, there is a check that the database owner should be sa, but in fact, it’s not wrong to have another database owner than sa.
It’s even recommended to create a dedicated account with zero privileges as the database owner. Check this post by Andreas Wolter on SQL Server database ownership.

Now, to come back to my situation: it’s not wrong to have my RestoreUser as the database owner. The user already had full privileges, because it was a member of the db_owner role.

Conclusion

Changing the database owner will allow the user to restore just that single database. Other restores and the creation of new databases will not be possible.
There are a few restrictions that you have to keep in mind. The database must already exist; if it doesn’t, the user must have CREATE DATABASE permission to restore it, and as a consequence, the user will be able to create any other database as well.
This option will also not work if you don’t allow database ownership for that user. If that’s not possible, you’re out of options.

I don’t know if there are other possibilities to restrict a user to restoring one single database. If you do know, just drop me a comment 😉

Happy RESTORE!

Remove Backup files from Azure Storage Part 2

In April this year, I posted a script to delete your backup files from Windows Azure Storage.
This week, I got a question from Mark about how you could use this script in a SQL Agent job on an Azure VM. I mentioned in my previous blog post that you should download the Azure PowerShell cmdlets before you can use the script, but Mark always got the following error when he tried to download the installation file.


In this blog post I will explain in more detail how you can configure the script correctly in a SQL Server Agent job and how you should install the Azure PowerShell cmdlets.

Start up your VM on Azure and connect to the instance with SSMS.
Before we install anything extra on the VM, let’s configure a job with the removal script that you can find in my previous post. Don’t forget to change the storage account, the access key and the container in the script to your own configuration.
Your job step should look like this. Choose the type PowerShell, not T-SQL.


When you execute the job manually, you will probably get the following error message The term ‘New-AzureStorageContext’ is not recognized as the name of a cmdlet…


This is because we haven’t installed the Azure PowerShell cmdlets yet. You can download the cmdlets here. Now, when you try to download them on the Azure VM, you will get the following error:


This is because your browser settings don’t allow you to download files. To fix this, open your Internet Explorer settings and select Internet Options


Select the security tab and click the button Custom level

Scroll down to Downloads and select the option Enable under “File download”


Click OK, close the settings and restart your browser. Now you should be able to download the file to install the Azure PowerShell Cmdlets


Save it and Run the file

First of all you will get this nice welcome screen…

Wait until it is finished and you see the next screen.
Just click Install

You will see that it will install the Microsoft Azure Powershell. Click “I accept” in the next screen

…and wait till the installation is finished

Click finish at the end

In the final screen you should actually see that the Azure Powershell Cmdlets are installed

Ok… that’s it!!!

Go back to the SQL Agent Job and execute it again. Normally the job should execute successfully!

To be 100% sure that it’s working, go to the Management Portal of Windows Azure and verify in the container that the files have been removed.

Have fun with it!

Cloud Witness in Windows Server v.Next

Last week, I attended the session “Did you vote today? A DBA’s guide to cluster quorum” at PASS Summit, by Allan Hirt.
Interesting stuff, and I learned that my quorum configuration is already done correctly according to what Allan explained.

Another interesting part was that Allan announced that there is a new type of quorum in Windows Server v.Next.

Cloud Witness!

Instead of using a File Share Witness or a Disk Witness, you can now also choose a Cloud Witness.
Basically, it creates a file on your Azure Storage account which counts as a quorum vote.
This cloud witness can be really helpful in the case of multi-site clusters.
With multi-site clusters, there is always the question of which datacenter you are going to put the file share witness in.
In fact, to configure it correctly, you should put your file share witness in a third datacenter, but that’s really too expensive for most companies just for a file share witness. The cloud witness provides a good alternative for this problem.

The only “issue” that I see with this cloud witness is that your cluster nodes must have a connection to the internet. And honestly, I haven’t seen many SQL Servers or cluster nodes that are able to connect to the internet.
But with the appropriate firewall settings, you should be OK.

I’ve already installed a 2 node Windows vNext Failover Cluster and tried it out.
It’s actually really easy to configure.

Start with opening the Failover Cluster Manager and connect to your cluster.
Right click on the cluster name → More Actions → Configure Cluster Quorum Settings…

In the “Select Quorum Configuration Option” window, select “Select the quorum witness”

In the “Select Quorum Witness” window, select “Configure a cloud witness”

To configure the cloud witness, you need to specify your storage account and your Azure storage account key.

This information can be found on the Azure Portal. Just go to the storage option. On the bottom of your screen you will see a button called Manage Access Keys.

Click on that button, copy one of the 2 keys and paste it in the Azure storage field of the cloud witness configuration

Your configuration should look similar to the screenshot below

Finally, complete the wizard and if all went well you have now configured your cloud witness.

When you look at your storage account in the Azure Portal, you will notice that a new container, “msft-cloud-witness”, is created with 1 blob file inside.

Pretty cool if you ask me 😀 !

Pieter

 

PASS Summit My First Time

Only 2 weeks left until PASS Summit and, believe it or not, it will be my first time at PASS Summit.
I was awarded MVP for the first time last year, and the MVP Summit takes place just prior to the PASS Summit event. This is an ideal combination and makes it affordable for me to go to Seattle and combine the 2 events!

I’m really looking forward to PASS Summit. I don’t really know what to expect, so I have already read some good blog posts that explain a lot for first-timers.

The Summit web site also has a really good explanation for first-timers, and Denny Cherry recorded a webinar where he explains how to get to Seattle, what you can do as a tourist, the Convention Center,… PASS is also organizing a “First Timer’s Orientation Meeting & Speed Networking” session that will bring the first-timers up to speed. I will attend that meeting for sure 😉

What I definitely know is that I will meet a lot of people from the SQL Server community again, people I have met before while speaking at SQL Saturdays and at SQL Server Days in Belgium. It will be fun, believe me!

I’m a member of the SQLUG in Belgium and each year we organize SQL Server Days, the biggest SQL event of the Benelux. This year was already our 7th edition, which was a great success.
Although we got really good feedback from our speakers and attendees, you always want to push your event to a higher level. So I will keep my eyes and ears wide open during these 3 days and I’m sure I will get some great new ideas that we can use at SQL Server Days next year.

This year I also started submitting sessions for SQL Saturdays in Europe. I got selected to speak in Slovenia, Exeter, Oslo and Holland, and I even did my first pre-con at SQL Server Days.
I also submitted sessions for PASS Summit. It would have been great to speak at “the Rock Werchter” of all SQL events, but unfortunately, none of my sessions got selected…

No worries, I got really good feedback from PASS so maybe I’ll be selected next year…

But…then… Mr. Denny announced his Speaker Idol Competition. Let me quote Denny what this contest is all about:

“The goal behind the Speaker Idol competition is to give speakers who are known in their region a chance to be seen at the international level so that they can improve their chances of being selected to speak in future years, to get some real time feedback from other speakers about their presentation style, and so that they can improve as speakers.”

THAT’S ME! I didn’t hesitate and registered for the competition. And guess what… I’m one of the 12 contestants! I really appreciate this opportunity. I’m speaking (for 5 minutes) on Wednesday, so let’s hope I can convince the “jury” and win the contest. Wish me luck 🙂!

I’ve heard that there are also a lot of other activities during the evenings and night like a SQL Long Run, SQL Karaoke, Kilt Thursday … More info can be found on http://www.sqlpass.org/summit/2014/Connect/AfterHours.aspx

I’m definitely going to the SQL karaoke! Not that I’m a really good singer, but it sounds like fun 🙂!

Well… let my US adventure begin and see you all within 2 weeks in Seattle at PASS SUMMIT 2014!

Pieter

Enable a SQL Job Unattended

Today a customer asked me to disable a daily load on their test database. I disabled the job, but my problem was that I couldn’t enable the job again because I’m only at that customer once a week…
I came up with this little trick, which I want to share with you, to re-enable the daily job unattended.

First of all, I scripted the enabling of the “daily job”. Just open the job properties, check Enabled and hit the Script button.

USE [msdb]
GO

EXEC msdb.dbo.sp_update_job
@job_id=N'bfd08f62-4074-4f83-bebf-c9d630037595',
        @enabled=1

Secondly, I scheduled a new job “Enable daily job” that will activate the daily job the day after it has been disabled.

The job step will be the T-SQL statement that you have scripted.
You need to schedule this job after the time the daily job would normally have run (but is now disabled) and before its next scheduled run. Make sure that the Schedule type is set to “One Time”, as we only want to activate that job once 😉

Select Notifications and tick the checkbox “Automatically delete job: When the job succeeds”. We don’t need this “Enable daily job” anymore once the “Daily job” has been enabled.
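If you prefer to script the whole thing instead of clicking through the GUI, a minimal sketch could look like the one below. It creates the one-time “Enable daily job”, gives it the step you scripted above, schedules it once and lets SQL Server Agent delete it after a successful run. The job and schedule names and the run date/time are just example values.

USE [msdb]
GO
--Create the helper job and let the Agent delete it once it succeeds (@delete_level = 1)
EXEC dbo.sp_add_job @job_name = N'Enable daily job', @enabled = 1, @delete_level = 1

--The job step is simply the T-SQL statement that was scripted earlier
EXEC dbo.sp_add_jobstep @job_name = N'Enable daily job', @step_name = N'Enable the daily job',
	@subsystem = N'TSQL',
	@command = N'EXEC msdb.dbo.sp_update_job @job_id = N''bfd08f62-4074-4f83-bebf-c9d630037595'', @enabled = 1'

--One-time schedule (@freq_type = 1), the day after the daily job was disabled (example date/time)
EXEC dbo.sp_add_jobschedule @job_name = N'Enable daily job', @name = N'Run once',
	@freq_type = 1, @active_start_date = 20141101, @active_start_time = 60000  --06:00:00

--Target the local server
EXEC dbo.sp_add_jobserver @job_name = N'Enable daily job', @server_name = N'(local)'
GO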

Easy to configure, the daily job gets enabled unattended, and my solution cleans itself up automatically. Works like a charm, and it’s one reminder less in my agenda!

 

How I completely remove a SQL Server database – the right way!

I was shocked today!
I found a blog post on how you should remove a SQL Server database and I didn’t know whether I should start crying or laughing…
This blog post might sound ridiculous to some of you, but apparently it is useful after all.

There are 2 possibilities to delete a database

GUI

In SQL Server Management Studio, you can right-click the database and select “Delete”.

In the Delete Object window, select the option “Delete backup and restore history information for databases” if you want to remove this information.
If you want to kick out open connections to your database, select “Close existing connections”. It will be impossible to remove the database if you don’t select this last option while there are still open connections to your database; you will receive an error that the database is still in use and cannot be deleted.

Once you hit the OK button, the database will be removed from the SQL instance and the database files on the OS level will also be removed. It is definitely not necessary to shut down the whole instance to remove a database…

T-SQL

The script below does exactly the same as the actions taken in the GUI and has the same result: the database will be removed!

--This will remove the backup history
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'DBToDelete'
GO
USE [master]
GO
--This will drop all the open connections
ALTER DATABASE [DBToDelete] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
--This will remove my database
DROP DATABASE [DBToDelete]
GO

Now, after the removal, you still have some extra cleanup to do that people often forget…

Delete the jobs

Delete the jobs that were related to the database. If you don’t remove them, the jobs will fail and you will receive unnecessary alerts.

Delete the backup files

If you don’t need the backup files anymore, just remove them. However, I would recommend keeping the last full backup of the database and archiving it for at least a year or two. You never know whether somebody will need the data in the future… 😉

Delete the logins without DB user

Your database probably had some database users configured that were linked to a server login.
If that server login isn’t used for any other database user and isn’t a member of any server role besides public, I would recommend removing that login, not only for security reasons but also to keep your server clean.
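A rough sketch like the one below can help you spot such logins (my own helper query, assuming sysadmin rights): it collects the SIDs of all database users that are mapped to a login and then lists the logins that are not mapped in any database and are not a member of any server role.

--Collect the SIDs of all database users in every database
CREATE TABLE #MappedLogins (sid varbinary(85))
EXEC sp_MSforeachdb N'USE [?];
INSERT INTO #MappedLogins (sid)
SELECT dp.sid FROM sys.database_principals dp
WHERE dp.type IN (''S'',''U'',''G'') AND dp.sid IS NOT NULL;'

--Logins that are not mapped to any database user and not a member of any server role
SELECT sp.name
FROM sys.server_principals sp
WHERE sp.type IN ('S','U','G')          --SQL logins, Windows logins and groups
AND sp.name NOT LIKE '##%'              --skip the internal certificate-based logins
AND sp.sid NOT IN (SELECT sid FROM #MappedLogins)
AND NOT EXISTS (SELECT 1 FROM sys.server_role_members rm
                WHERE rm.member_principal_id = sp.principal_id)

DROP TABLE #MappedLogins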

Well, I hope this can help some junior DBAs and that they find my blog post instead of the other one 😉

Pieter

Remove Azure Subscriptions from SSMS

Last week I was preparing my pre-con for SQL Server Days.
I was playing around a lot with the new Azure features because that will be a big part of my pre-con.
When I was trying to deploy a database to my Azure VM with the new “Deploy database to Azure VM” wizard,
I noticed that there were a lot of duplicate subscriptions that I could use. This was probably caused by all the testing I have done lately 😉
Adding a subscription is easy, but I couldn’t find any way to remove a subscription from the dropdown box.

Double Subscriptions

As an MVP, you get the privilege of asking questions to other SQL MVPs and the Microsoft product team.
So, I posted this question and quickly got a response from a Microsoft SQL Server Program Manager.
The solution is public information, so I’m happy to share it with you.

The references to the subscriptions are stored under the following directory

%USERPROFILE%\Documents\SQL Server Management Studio\Deploy DB to Windows Azure VM

Files

I removed them and went back to the deploy wizard, but I still had one subscription available.
Default file

Apparently, that is my active subscription.
To get rid of this active subscription, you need to update the deploymentsettings.xml file in the folder

%USERPROFILE%\AppData\Local\SQL Server\Deploy to SQL in WA VM

When you open the xml file, you will indeed see the subscription.

After removing the deploymentsettings.xml file, I got my “clean” dropdownbox back.

This is something that you won’t need a lot but always good to know!

Pieter

Speaking at SQL Server Days 2014

On the 30th of September and the 1st of October, the Belgian SQL Server User Group is organizing SQL SERVER DAYS for the 7th time.
Over the last few years, SQL Server Days has become more and more well known in the SQL Server community, and that really shows if you look at the impressive conference schedule!

This year SQL Server Days 2014 will again be a two-day conference.
On the first day, there are deep dive trainings given by international and national speakers.
The second day will be the main technical conference for database administrators, developers and business intelligence professionals!

Four years ago I went to SQL Server Days as an attendee; the three years after that I was selected to give a regular session.
But this year, I’m really honored to present my first pre-con, “SQL Server 2014: What’s in it for you?”.
I recently wrote a blog post on why it’s so hard to convince people to upgrade to SQL Server 2014.
Well, in this pre-conference, I will dig into the new features of SQL Server 2014.
I will put a strong focus on the new Windows Azure and hybrid cloud functionalities and of course other features
like the In-Memory OLTP engine, Buffer Pool Extension, enhanced Windows Server 2012 integration, and many more…
The session will be bulk loaded with demos, and hopefully I can convince people to upgrade 😉

Again, I’m really honored to be part of the event and I’m really looking forward to meeting some fine fellow SQL community speakers like
Grant Fritchey, Bob Duffy, Denny Cherry, Joey d’Antoni and many more!
SQL Server Days 2014 is hosted at San Marco Village, which is an excellent venue close to Antwerp.
You really don’t want to miss this event and the opportunity for networking with the Microsoft Data Platform professionals!

You can still register here.

Well, I hope to see you all at SQL Server Days! Really looking forward to it!

Last known good DBCC CHECKDB not always correct

On the internet you can find a lot of methods to find out when DBCC CHECKDB last ran on your databases. The two most common queries are based on DBCC DBINFO() and DBCC PAGE.
Below you can find two examples (replace “YourDatabase” with the correct database name):

USE YourDatabase
GO
CREATE TABLE #tmp
	(ParentObject varchar(1000) NULL,Object varchar(1000) NULL,Field varchar(1000) NULL,Value varchar(1000) NULL)
insert into #tmp
EXEC ('DBCC DBINFO() WITH TABLERESULTS')
select cast(value as datetime) from #tmp where field='dbi_dbcclastknowngood'
drop table #tmp

OR

DBCC TRACEON (3604); 
GO
DBCC PAGE (YourDatabase, 1, 9, 3);
GO

In my morning DBA checklist, I’m using the first query to check if there has been a database integrity check during the last 7 days. This query works well; however, last week I noticed an issue. As soon as a database is in read-only or emergency mode, the DBCC CHECKDB command will succeed, but the dbi_dbcclastknowngood field will (or can) not be updated. For that reason, my check indicates a failed status for my read-only databases…

Let’s take an example. First of all, I’m going to perform a DBCC CHECKDB for my AdventureWorks database

DBCC CHECKDB(AdventureWorks2012)

Once the check is finished, let’s verify the last dbcc check date with the DBCC DBINFO() query.

DBCC

Now, let’s set the database to read-only and do the test again.

DBCC read only

As you might notice, the last DBCC CHECKDB time is the same in both cases.

I found a good tip (in an old blog post) from Jonathan Kehayias to find the correct information: just scan the SQL Server error log for the last 7 days. If I check the logs from my previous tests, you can see that there was a DBCC CHECKDB at 14:57:22, which was on my read-only database.

Error Log

I’ve modified Jonathan’s script: I’m using xp_readerrorlog and I’m no longer checking for the SQL 2000 version… I’ve also replaced the DBCC check in my DBA checklist with the query below.

CREATE TABLE #ErrorLog
	(
	  Logdate datetime ,
	  ProcessInfo VARCHAR(12) ,
	  MsgText VARCHAR(3000)
	)
CREATE TABLE #NumberOfErrorLogFiles
	(
		ArchiveID tinyint,
		FileDate datetime,
		FileSize int
	)
insert into #NumberOfErrorLogFiles
EXEC master..xp_enumerrorlogs

DECLARE @SinceDate datetime  
SET @SinceDate = DATEADD(dd, -7, GETDATE())
DECLARE @ErrorFile tinyint
DECLARE @Severity tinyint
DECLARE @SeverityString nvarchar(3)
DECLARE @LastLogFile tinyint
select @LastLogFile=min(ArchiveID) from #NumberOfErrorLogFiles
where FileDate<@SinceDate
DECLARE @DBCCStatement nvarchar(1024)
set @DBCCStatement=N'DBCC CHECKDB (' + DB_NAME() + ')'

set @ErrorFile=0
while @ErrorFile<=ISNULL(@LastLogFile,0)
begin
    insert into #Errorlog
	EXEC master.dbo.xp_readerrorlog @ErrorFile, 1, @DBCCStatement, NULL, @SinceDate, NULL, N'ASC'
	set @ErrorFile=@ErrorFile+1
end

select * from #ErrorLog where Logdate>@SinceDate

drop table #ErrorLog
drop table #NumberOfErrorLogFiles

Have fun!

Pieter

Speaking at SQL Saturday Oslo #317

To end the holiday period in Belgium in style, I’m going on a trip to Oslo.
Not on vacation, but to speak at the first edition of SQL Saturday Oslo!
Well, in some ways, this might be considered a small vacation 😉

I’m very honored and excited that my session “How to manage many SQL Servers the easy way” has been selected.
It was a really impressive list of submitted sessions, but my session made it!
It’s the second time that I’ll be presenting this session at a SQL Saturday.
The first time was at SQL Saturday Exeter a few months ago, and I got really good feedback!
The purpose of this session is to point out which SQL Server tools you can use for multi-server management.
It is bulk loaded with demos and will give you a good idea of which features can be helpful in your environment.

I’m really looking forward to meeting Cathrine Willemsen (twitter | blog) and Tone Hansson (twitter | blog), the organizers of the event, who are doing an excellent job!
A lot of other great SQL names are on the schedule. I’m looking forward to meeting some new speakers like Tim Ford, Bob Duffy, Niko Neugebauer, Régis Baccaro, Kalen Delaney and many others.
And of course the other international speakers that I have met before, like Boris Hristov, Matija Lah, André Kamman, Marco Russo, …

Hope to see you all there!

Tracking login and user changes in SQL Server

Did you ever want to track SQL Server login and database user changes on all your SQL Servers? At one of my customers, the IT guys are sysadmins on the SQL Servers. In fact, the IT department manages the SQL Servers when I’m not around. One day, “somebody” removed a user and of course, nobody did it 😉

The first thing that came to mind was implementing SQL Server Audit. A good solution, but not all versions of SQL Server support SQL Server Audit: SQL Server 2008 and 2008 R2 only support it in Enterprise Edition, and there is no audit at all in SQL Server 2005. So, I had to come up with a solution that worked on these versions as well. My solution is based on this blog post by Aaron Bertrand (twitter | blog), but I’ve extended it.

Server Audit

I’ve implemented SQL Server Audit on all SQL Server 2008 and 2008 R2 Enterprise servers and on the SQL 2012 servers. Unfortunately there is no SQL 2014 yet, but you should also implement SQL Audit on that version. The audit dumps all the audit information into an audit log file on the server.

USE [master]
GO

CREATE SERVER AUDIT [Audit Login Changes]
TO FILE 
(	FILEPATH = N'ENTERYOURFILEPATHHERE'
	,MAXSIZE = 1024 MB
	,MAX_FILES = 10
	,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 1000
	,ON_FAILURE = CONTINUE
	,AUDIT_GUID = '33e055a9-1924-47c3-9798-e8875a5bcd42'
)
ALTER SERVER AUDIT [Audit Login Changes] WITH (STATE = ON)
GO

 

I defined the following Audit Action Types because I wanted to know about all changes made to logins, roles and users.

Audit Action Type | Description
DATABASE_ROLE_MEMBER_CHANGE_GROUP | Raised whenever a login is added to or removed from a database role
SERVER_ROLE_MEMBER_CHANGE_GROUP | Raised whenever a login is added to or removed from a fixed server role
DATABASE_PERMISSION_CHANGE_GROUP | Raised for any database permission change
SERVER_OBJECT_PERMISSION_CHANGE_GROUP | Raised whenever a GRANT, REVOKE, or DENY is issued for a server object permission
SERVER_PERMISSION_CHANGE_GROUP | Raised when a GRANT, REVOKE, or DENY is issued for permissions in the server scope, such as creating a login
DATABASE_PRINCIPAL_CHANGE_GROUP | Raised when users are created, altered, or dropped from a database
SERVER_PRINCIPAL_CHANGE_GROUP | Raised when server principals are created, altered, or dropped

More information on the Audit Action Types can be found here.

USE [master]
GO

CREATE SERVER AUDIT SPECIFICATION [Audit Login]
FOR SERVER AUDIT [Audit Login Changes]
ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP),
ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
ADD (DATABASE_PERMISSION_CHANGE_GROUP),
ADD (SERVER_OBJECT_PERMISSION_CHANGE_GROUP),
ADD (SERVER_PERMISSION_CHANGE_GROUP),
ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
ADD (SERVER_PRINCIPAL_CHANGE_GROUP)
WITH (STATE = ON)
GO

Event Notifications

As mentioned before, we cannot use SQL Audit on SQL 2008 and 2008 R2 Standard Edition or on SQL 2005. As an alternative, you can set up event notifications, which will capture the messages via Service Broker. The scripts below are based on Aaron’s scripts, but I’ve added more events because I wanted to trace more than just password changes.

Create the following table in the msdb database

USE [msdb];
GO

CREATE TABLE dbo.SecurityChangeLog
(
    ChangeLogID			 int IDENTITY(1,1),
	LoginName            SYSNAME,
    UserName             SYSNAME,
    DatabaseName         SYSNAME,
    SchemaName           SYSNAME,
    ObjectName           SYSNAME,
    ObjectType           VARCHAR(50),
    DDLCommand           VARCHAR(MAX),
	EventTime			 DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
	CONSTRAINT PK_ChangeLogID PRIMARY KEY (ChangeLogID)
);

 

Set up the queue

CREATE QUEUE SecurityChangeQueue;
GO

CREATE SERVICE SecurityChangeService ON QUEUE SecurityChangeQueue
  ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

 

Set up the event notification. If you check the FOR clause, you will notice that these are the same actions as defined in the SQL Audit specification.

CREATE EVENT NOTIFICATION CreateLoginNotification
    ON SERVER WITH FAN_IN
    FOR CREATE_LOGIN,ALTER_LOGIN,DROP_LOGIN,CREATE_USER,ALTER_USER,DROP_USER,ADD_SERVER_ROLE_MEMBER,DROP_SERVER_ROLE_MEMBER,ADD_ROLE_MEMBER,DROP_ROLE_MEMBER
    TO SERVICE 'SecurityChangeService', 'current database';
GO

 

Install the following stored procedure to log all the event notifications into the table we’ve just created. You might notice in the loop that I’m checking the version of SQL Server for some events. This is because the event notification content is different for SQL 2008 (R2) and SQL 2005.

USE [msdb];
GO
CREATE PROCEDURE [dbo].[LogSecurityChange]
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
	DECLARE @version int
    DECLARE @message_body XML;
	set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))

    WHILE (1 = 1)
    BEGIN
       WAITFOR 
       ( 
         RECEIVE TOP(1) @message_body = message_body
         FROM dbo.SecurityChangeQueue
       ), TIMEOUT 1000;

       IF (@@ROWCOUNT = 1)
       BEGIN
		if CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/EventType)')) in ('DROP_USER','CREATE_USER','ALTER_USER') or @version>9
		BEGIN
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DefaultSchema)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'))
		END
		ELSE
		BEGIN
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/SchemaName)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/EventType)')) + ' ' + 
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/RoleName)')) + ' FOR ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/LoginType)')) + ' ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/ObjectName)'))
		END
       END
    END
END

 

The last step is modifying the queue so that it uses the stored procedure and starts tracking the login and user changes.

ALTER QUEUE SecurityChangeQueue
WITH ACTIVATION
(
   STATUS = ON,
   PROCEDURE_NAME = dbo.LogSecurityChange,
   MAX_QUEUE_READERS = 1,
   EXECUTE AS OWNER
);
GO

Bring it all together

All the information is now logged on each individual server. I want all the audit log information in one central database so that the customer can run reports on it. First of all, I’ve created a database on one of the SQL Servers and created a table that will store all the audit logs of all the servers.

CREATE TABLE [dbo].[SecurityChangeLog](
	[ChangeLogID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [sysname] NOT NULL,
	[LoginName] [sysname] NOT NULL,
	[DatabaseName] [sysname] NOT NULL,
	[ObjectName] [nvarchar](128) NULL,
	[ObjectType] [varchar](50) NULL,
	[DDLCommand] [nvarchar](max) NULL,
	[EventTime] [datetime2](7) NULL,
 CONSTRAINT [PK_ChangeLogID] PRIMARY KEY CLUSTERED 
(
	[ChangeLogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

 

The next step is retrieving all the log information from each server. I was thinking of using an SSIS package for this, but it seemed easier to use PowerShell instead. The script below is based on the Central Management Server that I’m using. First, I retrieve all the SQL Servers from which I have to collect the audit data. This list is stored in the table msdb.dbo.sysmanagement_shared_registered_servers_internal of the Central Management Server. Make sure that you replace YOURGROUPID with the appropriate group id of your CMS and YOURCMS with the instance name of your CMS.

Once we have the list, I execute a script GetAuditLog.sql against each instance to retrieve the audit information.

The final step will write the retrieved audit information into the database. Make sure you change the YOURDATABASENAME and YOURSQLSERVERINSTANCE in the script to your database name and SQL instance.

Schedule a SQL Agent Job to execute the powershell script on a daily basis.

 

#Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Import-Module "sqlps" -DisableNameChecking
cls
$InputFile = "YOURFILEPATH + GetAuditLog.sql"

#Get the list of servers from the Central Management Server
$InstanceNameList = Invoke-Sqlcmd -Query "select server_name from msdb.dbo.sysmanagement_shared_registered_servers_internal 
                                          where server_group_id=YOURGROUPID" -ServerInstance "YOURCMS"
                                         
Clear-Variable -Name results -ErrorAction SilentlyContinue
Clear-Variable -Name record -ErrorAction SilentlyContinue

#Get Audit information
foreach($instanceName in $instanceNameList) 
{ 
    write-host "Executing query against server: " $instanceName.server_name
    $results += Invoke-Sqlcmd -InputFile $InputFile -ServerInstance $instanceName.server_name
}
#Save the Audit Log into the database
foreach($record in $results)
{
#write-host "ServerName: " $record.ServerName " " $record.DDLCommand
Invoke-Sqlcmd -Query "insert into [YOURDATABASENAME].[dbo].[SecurityChangeLog] ([ServerName],[LoginName],[DatabaseName],[ObjectName],[ObjectType],[DDLCommand],[EventTime])  
                      Values('$($record.ServerName)','$($record.LoginName)','$($record.DatabaseName)','$($record.ObjectName)','$($record.ObjectType)','$($record.DDLCommand)','$($record.EventTime)')" -ServerInstance "YOURSQLSERVERINSTANCE" 
}

 

Below you can find the SQL script that is used to retrieve the audit information in the PowerShell script. Depending on the version, it will read either the audit file or the event notification table. I’m only selecting the logs of the last day because the PowerShell script is scheduled to run on a daily basis.

USE MSDB
GO
declare @AuditLog  table (
LoginName sysname,
DatabaseName sysname,
ObjectName nvarchar(128),
ObjectType varchar(50),
DDLCommand nvarchar(4000),
EventTime datetime)
declare @LogFilePath varchar(256)
declare @version int
set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))
IF (@version=10 and (select convert(varchar,SERVERPROPERTY('Edition'))) not like '%Standard%') OR (@version=11)
BEGIN
	select @LogFilePath=log_file_path from sys.server_file_audits
	where name='Audit Login Changes'
	IF @LogFilePath IS NOT NULL
	BEGIN
		set @LogFilePath=@LogFilePath +'*.sqlaudit'
		insert into @AuditLog
		SELECT session_server_principal_name,database_name,object_name,m.class_type_desc,statement,event_time FROM sys.fn_get_audit_file (@LogFilePath,default,default) f
		INNER JOIN sys.dm_audit_class_type_map m on m.class_type=f.class_type;
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from @AuditLog
		where EventTime>dateadd(dd,-1,getdate())
	END
END
ELSE
BEGIN
	IF exists (select name from msdb.sys.tables where name='SecurityChangeLog')
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from msdb.dbo.SecurityChangeLog
	    where EventTime>dateadd(dd,-1,getdate()) and DDLCommand<>''
END

Report

To make it easy for the customer to retrieve the data, I’ve created this report.

Security Changes

I know that this solution is not really tamper-proof, because people can simply stop the audit, disable the queue or delete the records from the tables, but I think it is a good start for tracking login and user modifications.

Have fun!

It’s hard to convince clients to use SQL 2014

SQL Server 2014 was released on the 1st of April, and it has some really cool features that might help you and your organization and can definitely justify an upgrade. In my job as a SQL consultant, I have a lot of different customers, and none of them is currently running SQL Server 2014 or planning to upgrade to this new version. I find it really hard to convince my clients to upgrade, or even to pick it for a brand new installation.

Why is it so hard to convince people to start using SQL Server 2014? After questioning my customers, I came up with these 5 reasons.

I don’t trust it

This is probably the most common argument from my customers. They don’t trust the new version because it has just been released and probably has a lot of bugs. “Let’s wait till the first service pack has been released, then we know it is stable…” Most of my clients tell me that there are just too many risks in installing an RTM version of SQL Server: “What happens if something goes wrong and we’re not prepared for it?” I don’t think Microsoft would release a version of SQL Server that is not stable or not fully tested. It’s true that in the beginning of a new release more bugs are found, but I wouldn’t call the new release unstable. Microsoft is still releasing Service Packs and Cumulative Updates for older versions of SQL Server. Does this mean that this software is unstable? Well… no, because you’re probably using it right now. Bug fixes are always part of software, whether it’s a new or an old version.

Oh, is there a new release?

I’ve noticed that many people don’t know that a new version has been released. They just install the older version that they already know. Or they have heard about a new release, but are not aware of all the cool new functionality that could benefit their company. And that is where we, the SQL Family, come in: we try to make people aware of all these new features through SQL events, webinars, workshops, blogs,…

My vendor doesn’t support it yet…

Another reason I hear a lot is that the vendor of a software package (which uses a SQL database) does not support the new version yet. I even had to install a brand new SQL Server 2005 last month because the vendor didn’t support any other version… I told my customer to get rid of that software and find a good alternative. The vendor’s software will probably work on SQL Server 2014, but if you use it anyway, you will lose full support on the software… Try the Upgrade Advisor and see if there is some exotic configuration that would no longer work. I doubt it… I usually send the report to my customer and the supplier, and I’ve noticed lately that suppliers respond positively, saying they will support SQL Server 2014 in their next major release.

It’s too expensive

Upgrading to a new version of SQL Server is a project that requires some investment: not only the license cost, but also new hardware, re-coding your application, testing… Many customers complain that Enterprise Edition has become really expensive, but do they really need Enterprise Edition? From SQL Server 2008 onwards, you can use the DMV sys.dm_db_persisted_sku_features to view all Enterprise Edition-specific features that are enabled in the current database. If they do need Enterprise Edition, it might be worth starting a consolidation project instead of an upgrade project to save license costs.
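As a quick illustration, run the query below in the database you want to verify; an empty result means no Enterprise-only features are persisted in that database.

--Lists the Enterprise Edition features that are in use in the current database
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features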

The upgrade can also save you money: new features like page and row compression or Resource Governor can save resources, and the new T-SQL improvements can save your developers a lot of time.

No time, too much work

I hear this a lot… “Upgrade…? Sorry, but we don’t have time for that… Why should we upgrade? The database is working just fine and we have other, more important things on our agenda.” I don’t think it really has anything to do with “no time”. Some people are just not motivated to do an upgrade. It’s hard to convince people who are not willing to learn and are not open to new ideas. And maybe that’s because of the points mentioned above: they don’t trust the software and it will cost the company a lot of money.

I don’t know if I’m the only one experiencing this 🙂 But I hope that I can change the perception of my clients… it’s a tough job though… believe me…

How to change the database owner for the secondary replica database

When you restore a database to a SQL Server instance where the database doesn’t exist yet, the owner of the database is changed to the user executing the restore. Actually, this is also the case when attaching a database. The ownership can give the login extra permissions, which in some cases you don’t want. A good rule of thumb is to choose “sa” as the database owner. This can easily be done by executing the following statement after the restore is done.

ALTER AUTHORIZATION ON DATABASE::<YourDatabaseName> to sa;

Just keep in mind that, when you change the owner of the database, the user that owned it before might end up with fewer or no permissions on the database.

Unfortunately, if you restore a database as read-only, or if you leave it in restoring mode for an AlwaysOn Availability Group configuration, you cannot change the database owner anymore.
Even if you set up an AlwaysOn Availability Group with the SQL Server Management Studio wizard, the owner of the database on the secondary replica will be the user that executed the wizard. Quite obvious, because in the background the wizard is performing the restore 😉
Your database must be in read/write mode before you can change the database owner.

If the database is read-only, it can briefly be set to read/write for the modification and then set back; that shouldn’t be an issue (see the sketch below).
In the case of AlwaysOn Availability Groups, you will have to perform a failover to the secondary replica, change the database owner and fail back to the original primary replica.
But I can imagine that there will be situations where a failover is not possible, for example with asynchronous commit mode, or simply because the business doesn’t want any downtime.
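For the read-only case, a minimal sketch could look like this (YourDatabaseName is a placeholder):

--Briefly switch to read/write, change the owner and switch back to read-only
ALTER DATABASE [YourDatabaseName] SET READ_WRITE WITH ROLLBACK IMMEDIATE
ALTER AUTHORIZATION ON DATABASE::[YourDatabaseName] TO sa
ALTER DATABASE [YourDatabaseName] SET READ_ONLY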

A possible workaround could be to manually prepare the secondary database for the availability group, but perform the restore as the sa user. The following script can be used:

EXECUTE AS login='sa'
RESTORE DATABASE [YourDatabaseName] FROM DISK = '<YourBackupPath>' 
REVERT
GO

More information on Manually Prepare a Secondary Database for an Availability Group can be found here

It’s a simple script, but it makes sa the owner of the database and you don’t have to worry about a failover afterwards to set the correct database owner.

Pieter

Test your scripts against different database states

You have probably all created scripts that worked just fine on your own instance but didn’t work in production. I’ve seen a lot of scripts that run just fine as long as all the databases have the ONLINE status, but as soon as a database is in another state, the scripts start to fail… The possible database states are:

    • ONLINE
    • OFFLINE
    • RESTORING
    • RECOVERING
    • RECOVERY PENDING
    • SUSPECT
    • EMERGENCY

More information on the database states can be found on http://technet.microsoft.com/en-us/library/ms190442.aspx. You should handle the different database states in your scripts and test them against all of these states. Now… the challenge is to force a database into a particular state. In this blog post, I’m going to explain how you can create test databases in each state.

ONLINE

That’s the easy part. When you create a new database, it will automatically become ONLINE

CREATE DATABASE ONLINEDB

OFFLINE

When you want to create an OFFLINE database you first have to create the database and run the ALTER DATABASE statement to bring it OFFLINE.

CREATE DATABASE OFFLINEDB
GO
ALTER DATABASE OFFLINEDB SET OFFLINE
GO

You can also bring a database offline with the GUI of the SQL Server Management Studio.

RESTORING

To create a database in RESTORING state, create a new database, take a FULL BACKUP of the database and RESTORE the database with the option NORECOVERY. Your database will not come ONLINE but it will stay in RESTORING mode.

CREATE DATABASE RESTORINGDB
GO
BACKUP DATABASE RESTORINGDB TO DISK='<YourBackupPath>RESTORINGDB.bak' WITH COPY_ONLY
GO
RESTORE DATABASE RESTORINGDB
FROM DISK='<YourBackupPath>RESTORINGDB.bak'
WITH REPLACE,NORECOVERY
GO

RECOVERING

This is a temporary state. Normally, a database is only in the “In Recovery” state during startup, i.e. when SQL Server starts up the database or when you bring it back ONLINE from an OFFLINE state. I haven’t found a way to simulate a database that stays in RECOVERING mode…

RECOVERY PENDING

Follow these steps to create a database with a RECOVERY PENDING state

  • Create an empty database
  • Stop the SQL Server Service
  • Rename or remove the database transaction log file of the database
  • Restart the SQL Server Service

You will notice that the database will now have a Recovery Pending state

SUSPECT

I followed this blog post by Paul Randal (blog | twitter) to create a SUSPECT database. However, my database always ended up in the Recovery Pending state and not in SUSPECT… In the blog post, Paul performs an update in an uncommitted transaction. I changed it into an insert of an extra colleague (myself ;)) and stopped the instance. I opened the log file with the same XVI32 tool, searched for my name and changed that section. When I restarted the SQL Server instance, the engine was unable to roll back my transaction and the database finally went into the SUSPECT state. Paul describes the differences between SUSPECT and RECOVERY PENDING in this article.

EMERGENCY

When you want to create a database that needs to go into EMERGENCY mode, you first have to create the database and run the ALTER DATABASE statement.

CREATE DATABASE EMERGENCYDB
GO
ALTER DATABASE [EMERGENCYDB] SET EMERGENCY
GO

OTHER DATABASE OPTIONS

Apart from the states mentioned above, I would also recommend creating the following test databases:

  • Read Only –  ALTER DATABASE … SET READ_ONLY
  • Restricted User – ALTER DATABASE … SET RESTRICTED_USER
  • Single user – ALTER DATABASE … SET SINGLE_USER
  • A database with name DB_EX@T!ç-$&ù!&.,.;=+#

Well, start testing your scripts and see how many of them could be improved.

Moving transaction log files

One of my customers took my advice and finally wanted to move the transaction log files to a separate disk. Probably, the current disk space issues on that server helped here a little bit as well 😉
The best way to move database and log files to a new location is by specifying the new file location in the FILENAME clause of the ALTER DATABASE statement.
To move a file of a database, follow these steps:

  1. Set the databases offline
  2. Move the file or files to the new location
  3. Alter the database for each file that has been moved
  4. Bring back the database online

This method works really well for a single database, but if you have to repeat it more than a couple of times, you might as well automate it.
And in my case, the customer wanted to move the transaction log files for all 230 databases in one go.
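For a single database, a minimal sketch of the four steps looks like this (the database name, logical log file name and target path are just placeholders):

--1. Take the database offline
ALTER DATABASE [YourDatabaseName] SET OFFLINE WITH ROLLBACK IMMEDIATE
--2. Copy or move the .ldf file to the new location on the OS level
--3. Point the database to the new file location
ALTER DATABASE [YourDatabaseName]
MODIFY FILE (NAME = [YourDatabaseName_log], FILENAME = 'E:\SQLLogs\YourDatabaseName_log.ldf')
--4. Bring the database back online
ALTER DATABASE [YourDatabaseName] SET ONLINE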

The first part of the script collects the current physical paths and file names of the transaction log files and stores them in a table variable.
Don’t forget to specify the parameter @NewPath, which should contain the new path for your transaction log files.
Secondly, a loop goes over all the databases that are ONLINE and executes the four actions described above. With the ONLINE filter, you avoid bringing OFFLINE databases back online.
If the move was successful, the script deletes the old transaction log file. I’m using xp_cmdshell to perform the COPY and the DEL (I know…). If you know a better solution, just drop me a comment!
Also keep in mind that this script only works within the same SQL Server instance.

SET NOCOUNT ON
GO
declare @DatabaseFileLocation as table  (
[Databaseid] int,
[DBName] sysname,
[FileType] int,
[name] varchar(256),
[physical_name] varchar(1024)
)

DECLARE @sqlcmd nvarchar(1024), @Newpath nvarchar(1024),@ID int

--Specify the new transaction log file folder
set @Newpath='E:'
set @ID=4

--Collect the current physical name of the transaction log file
SET @sqlcmd = N'use [?]
select DB_ID(), DB_NAME(), type, name, physical_name from sys.database_files where type=1'

Insert into @DatabaseFileLocation
exec sp_MSForEachDB @sqlcmd

while 0=0 
begin
	select @ID=min(DATABASE_ID) from sys.databases where database_id>@ID and state_desc='ONLINE'
	if @@rowcount=0 or @ID is null break
	begin
		--Set database OFFLINE
		select @sqlcmd= N'ALTER DATABASE [' + name + '] SET OFFLINE WITH ROLLBACK IMMEDIATE' from sys.databases
		where database_id=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--COPY the transaction log file to the the new path
		SELECT @sqlcmd= 'COPY ' + physical_name + ' ' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec master..xp_cmdshell @sqlcmd
		--ALTER the database so that it will point to the new transaction log file location
		select @sqlcmd='ALTER DATABASE [' + DBName + '] MODIFY FILE ( NAME = ['+ name + '], FILENAME = ''' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) + ''')' from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--Set database ONLINE
		select @sqlcmd= N'ALTER DATABASE [' + name + '] SET ONLINE' from sys.databases
		where database_id=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--If all went well, remove the old transaction log files
		SELECT @sqlcmd= 'DEL ' + physical_name  from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec master..xp_cmdshell @sqlcmd
	end
end

DBA Morning Checklist with PBM (Part 3)

A few months ago, I blogged about how I run my morning checklist using Policy-Based Management together with a Central Management Server. This method works really well for me and I’ve implemented it at all my customers by now. As the months passed by, I added extra checks to my list and created the necessary policies for them. This blog post gives a short description of what my full morning DBA checklist looks like.

All jobs succeeded

Are there any jobs that have failed in the last 7 days? I’m using 7 days here because I mostly visit my customers on a weekly basis. You can of course change that period in the condition that is checked by the policy.
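The condition boils down to something like the sketch below against the job history tables (my approximation, not the exact policy condition):

--Job outcomes that failed during the last 7 days
SELECT j.name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.step_id = 0     --job outcome
AND h.run_status = 0    --failed
AND CONVERT(datetime, CONVERT(varchar(8), h.run_date)) > DATEADD(dd, -7, GETDATE())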

Check SQL Server services are running

This policy will check if all SQL Server services are still running. Sounds ridiculous but it happened to me a few times that the SQL Server Agent was stopped without anybody noticing. This check only works from SQL Server 2008 R2 SP1 onwards because I’m using the DMV sys.dm_server_services, which is available since that version.
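You can see the same information yourself with a simple query against the DMV:

--Requires SQL Server 2008 R2 SP1 or later
SELECT servicename, status_desc, startup_type_desc, last_startup_time
FROM sys.dm_server_services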

Data and Log files on separate volumes

For best performance, your data and log files should be stored on separate volumes. In the condition, I’m just checking the @DataAndLogFilesOnSeparateLogicalVolumes parameter in the Database Performance facet.

Data File Growth in MB

The autogrowth setting of your data files should be set in MB instead of %. I’m also verifying that the growth setting is at least 128 MB…
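Roughly, this corresponds to a query like the sketch below (growth is stored in 8 KB pages, so 128 MB equals 16384 pages); the policy condition itself is built differently, but the idea is the same.

--Data files that still grow by percent or by less than 128 MB
SELECT DB_NAME(database_id) AS DatabaseName, name AS LogicalFileName, growth, is_percent_growth
FROM sys.master_files
WHERE type = 0    --data files only
AND (is_percent_growth = 1 OR growth < 16384)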

Database Auto Close

You should never set the auto close parameter to True on your database. This option will cause performance degradation because of the overhead of opening and closing the database after each connection. The policy will check if there is any database on the server that has auto close set to True.

Database Auto Create Statistics

This policy will check if the Auto Create Statistics database setting is set to True. In some cases, like on some SharePoint databases, you have to ignore this check because the statistics are handled by SharePoint.

Database Auto Shrink

This rule checks whether the auto shrink database option is set to False. This setting should never be set to True because it will lead to physical fragmentation.

Database Auto Update Statistics

This policy will check if the Auto Update Statistics database setting is set to True. In some cases, like on some SharePoint databases, you have to ignore this check because the statistics are handled by SharePoint.

Database Collation

Is the database collation the same as the collation of my master database? The condition that is checked is the @CollationMatchesModelOrMaster parameter in the Database Performance facet.

Database Compatibility Level

The database compatibility level should be equal to the server level. The condition checks the @CompatibilityLevel database parameter against the compatibility level of the master database, which I retrieve by querying sys.databases.

Database DBCC CHECKDB last 7 days

Has a DBCC CHECKDB been performed on the databases during the last week? I’m choosing 7 days here because this is how I schedule the DBCC command at my customers. The condition is checked with a T-SQL query that uses the DBCC DBINFO() information.

Database is in Normal State

The database should be in a normal state. This policy will indicate a failure for databases with an offline, suspect, … status.

Database owner

This policy will check if the owner of my database is set to sa.

Database page verify

Database option Page Verify should be set to CHECKSUM.
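For a quick manual check, most of the database-level settings above (auto close, auto shrink, auto create/update statistics, owner, state and page verify) can be eyeballed with a single query against sys.databases; a rough sketch:

SELECT name,
	is_auto_close_on,
	is_auto_shrink_on,
	is_auto_create_stats_on,
	is_auto_update_stats_on,
	SUSER_SNAME(owner_sid) AS owner,
	state_desc,
	page_verify_option_desc
FROM sys.databases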

Free disk space

This policy is going to check if there is any LUN on the server that has less than 10% free disk space. The check itself is done with a PowerShell command or, if PowerShell is not installed on the server, with the fsutil command. The script also supports mount points.

Full or Differential Backup

Has a full backup of my database been taken during the last 24 hours? Or… has a differential backup been taken during the last 24 hours and a full backup during the last 7 days? Some of the databases are too big to take a full backup every day; for those, this check verifies that there is a differential backup. I’ve planned a full backup every week, next to the differential backups. You can of course change the condition according to your backup strategy.
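A manual spot check could look like the sketch below against msdb (type 'D' is a full backup, 'I' a differential); again, the policy condition itself is built differently, but the idea is the same.

--Most recent full and differential backup per database
SELECT d.name,
	MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS LastFullBackup,
	MAX(CASE WHEN b.type = 'I' THEN b.backup_finish_date END) AS LastDiffBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name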

Jobs owned by sa

Are my jobs owned by the sa user? In the condition, I’ve filtered on the categories Jobs from MSX, Database Maintenance, Full-Text and Uncategorized. There is no server policy for this, so I’m using a T-SQL script based on the sysjobs table.
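The T-SQL behind it is roughly this (a sketch; the category filter mentioned above is left out for brevity):

--Jobs that are not owned by sa
SELECT j.name, SUSER_SNAME(j.owner_sid) AS JobOwner
FROM msdb.dbo.sysjobs j
WHERE SUSER_SNAME(j.owner_sid) <> 'sa'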

Log File Growth in MB

The autogrowth setting of your log files should be set in MB instead of %. I’m also verifying that the growth setting is at least 128 MB, and not 1 MB as I sometimes see…

Recovery Model = FULL for Production Databases

The policy is going to check if the recovery model of production databases is set to full. By default, I always use the full recovery model for production databases, unless the application owner tells me that simple is fine. I had to find a way to specify whether a database lives on a production server or not. I used a simple trick: specifying an extended property “Environment” on the master database.

In the policy, I’ve specified a server restriction based on the extended property.
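Tagging a production server comes down to adding the extended property once on the master database, something like this (the property name "Environment" and value "Production" are simply the values I use here as an example):

USE master
GO
--Add the tag once per production server
EXEC sys.sp_addextendedproperty @name = N'Environment', @value = N'Production'
--Verify (class 0 = database-level extended properties)
SELECT name, value FROM sys.extended_properties WHERE class = 0
GO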

Severity Alerts

Were there any alerts in the SQL Server log with a severity higher than 16 during the past 7 days? When your SQL Server log file is really big, this policy might return a timeout.

Transaction Log Backup Taken

Has a transaction log backup of my database been taken during the last 2 hours? Again, you can change this time period depending on the backup strategy that you’re using.

All these checks give me a nice overview of how healthy my SQL environment is, in no time!

You can download all my policies here

In case you should have any issues or you want to share some extra policies, just let me know!

Pieter

Remove SQL Backup Files from your Azure Storage

Microsoft introduced a lot of new Azure features in SQL Server 2014, which was released this week. It has become really easy to store your backup files on Windows Azure (or should I say Microsoft Azure 😉?).
You can check my previous blog post on how to take a backup to Windows Azure. This all works really great, and yesterday I also introduced this new feature to one of my customers and they wanted to go ahead with it! While I was driving home, I was thinking about how I could implement this in a good way. My customer didn’t want to use Smart Backup, because they wanted to be in control of the backup process. Scheduling the backups with the SQL Server Agent looks like the appropriate solution here: just like taking backups to disk, but you back up to URL instead.

When you take backups to disk, you also have to schedule a cleanup job for the old backup files. The same goes for Azure storage, unless you want to pay a lot of money for your storage… 😉
But how can I clean up the backups in my Azure storage container? I decided to use PowerShell to clean up the backup files.

First of all you have to install the Azure PowerShell cmdlets. More information can be found here. Once the cmdlets are installed, don’t forget to connect to your subscription as mentioned. Execute the Get-AzureSubscription Command in the Windows PowerShell ISE.

Login to your Azure portal and copy the access key of your storage account because you will need it in the script where it says “Paste your storage access key here“.

In my script, I’m using Get-AzureStorageBlob to get the blobs out of my backup container. I check the LastModified date for each blob with extension *.bak or *.log. If it’s older than my cleanup time (3 days for full backups and 1 day for transaction log backups), I remove the backup file with the Remove-AzureStorageBlob cmdlet.

$CleanupTime = [DateTime]::UtcNow.AddHours(-72)
$context = New-AzureStorageContext -StorageAccountName koherastorage -StorageAccountKey {Paste your storage access key here}
Get-AzureStorageBlob -Container "sqlbackup" -Context $context | 
Where-Object { $_.LastModified.UtcDateTime -lt $CleanupTime -and $_.BlobType -eq "PageBlob" -and $_.Name -like "*.bak"} |
Remove-AzureStorageBlob
$CleanupTime = [DateTime]::UtcNow.AddHours(-24)
Get-AzureStorageBlob -Container "sqlbackup" -Context $context | 
Where-Object { $_.LastModified.UtcDateTime -lt $CleanupTime -and $_.BlobType -eq "PageBlob" -and $_.Name -like "*.log"} |
Remove-AzureStorageBlob

You can easily schedule this with a SQL Server Agent job. Just choose PowerShell as the job step type.

I know this script can be fine-tuned, and any suggestions are appreciated.

 Pieter

Faster migration of your SQL 2005 databases

Many DBAs are trying to get rid of the old SQL Server 2005, which is a good thing.
I’ve been working on some consolidation and migration projects, and I’ve found a new and faster way to migrate SQL 2005 databases.

The easiest way to migrate a database is to back up the database, copy it to the new server and restore it. But how can you make this process faster? Check out this blog post by Jes Schultz Borland (blog | Twitter): compressing the backup file can make the migration go faster. The backup will be faster because SQL Server has to write less to disk, the copy will be faster because the backup files will be much smaller, and the restore will be faster because SQL Server has to read less from disk. However, there is one little problem… SQL Server 2005 doesn’t support backup compression. It is only available from SQL Server 2008 Enterprise Edition, or from SQL Server 2008 R2 Standard onwards.

But now there is a way to compress your SQL 2005 backup files without using any third-party tool. A few months ago, Microsoft introduced a tool called “SQL Server Backup to Azure Tool”. They created it to have a consistent backup solution. In SQL Server 2014, backup to Azure, encryption and compression are embedded in the engine, but these features are not available in older versions of SQL Server (except for compression, of course). The Backup to Azure Tool allows you to use the same functionality on older versions of SQL Server, from SQL 2005 onwards. The good thing about the tool is that you don’t need to use any Azure functionality to use encryption or compression. This means that we can compress our on-premises SQL 2005 backups and make our migration faster!

The tool can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40740 and is pretty straightforward to install. It is supported on SQL Server 2005 or later, and on Windows Server 2008 or later for servers and Windows 7 or later for client operating systems.

Once you start up the tool you have to configure some rules.

Click the Add button and start by specifying the path where you are going to save your backup files and the file name pattern:

I don’t want to use the Azure functionality because I’m using the tool for migration purposes to another on-premises server.
Select Use local Storage.

Select the option “Enable compression”.

Now you have set up the rule that is needed to compress local backups.

I did a few tests to see how well the Backup to Azure Tool compresses my backups. I used a 130 GB database and ran the tests on my laptop (16 GB RAM and 8 CPUs).
I used SQL Server 2012 for the tests because I wanted to compare normal backup compression with the Backup to Azure Tool compression.

Test | Duration | MB/sec | Backup file size | CPU usage
Full backup without compression and without Backup to Azure Tool compression | 2696 sec | 34 MB/sec | 89.3 GB | 4%
Full backup with compression and without Backup to Azure Tool compression | 1502 sec | 61 MB/sec | 16.9 GB | 10%
Full backup without compression and with Backup to Azure Tool compression | 2344 sec | 39 MB/sec | 13.5 GB | 20%
Full backup with compression and with Backup to Azure Tool compression | 1403 sec | 65 MB/sec | 16.3 GB | 15%

I noticed that the backup file is smaller when I use the Backup to Azure Tool compared to normal backup compression.
On the other hand, the duration is almost the same as that of a backup without compression, and the CPU usage is about 10% higher than with normal backup compression.
But even with these results, it still makes sense to use the tool for faster migrations: you have to copy less data and the restore will also be much faster because less data has to be read from disk.

Happy migration!

Pieter

Speaking at SQL Saturday Exeter #269

At the end of last year, I had my first SQL Saturday experience: I presented my first international session at SQL Saturday in Slovenia. It was really great, so I decided to submit sessions for other SQL Saturdays. I really like the British people, so I submitted a few sessions for SQL Saturday Exeter. And guess what, one of my sessions got selected! I’m really looking forward to meeting the British SQL people and the other international speakers I have met before, like Jen Stirrup, Stacia Misner, William Durkin, André Kamman, Mladen Prajdic and Matija Lah. You can find my session abstract below.

 

1, 2, 3, … wait… 1, 2, 3, … many… SQL Servers

Quote: “A good DBA is a lazy DBA”. When you have to manage a few SQL Servers, it’s not that hard to be a lazy DBA and to automate your day to day activities. But when you have to manage over 100 instances, it can become a tough job. Just imagine how you will deploy a new maintenance job on all your instances? Are you going to connect to each server one by one? Not really… Microsoft provides an enhanced set of multi-server management tools. The purpose of this sessions is to point out which SQL Server tools you can use for multi-server management. The session is bulk loaded with demos and it will give you a good idea what features can be helpful in your environment.

More information about the event can be found on http://www.sqlsaturday.com/269/eventhome.aspx

Hope to see you all there!

Visualize your backup schedules

Recently, a customer asked me to provide the database backup timings for every SQL Server (only the full backups).
Originally, all the backups were taken at 19:00 on a SAN disk that was attached to each server directly. Once a backup was taken, it was then pushed to tape during the night. But recently the customer changed strategy and all backups had to be written to a CIFS share on a NetApp. The backup location was changed, but not the schedule. As a result, some servers started having problems with their backups: all servers were taking their backups at the same time and transferring them over the network to the same share, so the network load increased a lot 😉

So, I had to come up with a solution to get the start time and the duration of each backup job of each server. With that information, I could reschedule the jobs on each server so that the network load was spread during the night.

This blog post by Brad Schulz (Blog) helped me get the information out of the sysjobs tables. I changed the script a bit so that I get the average backup duration for each server. I’m using a Central Management Server, so it was easy to collect the information in one go.

SET NOCOUNT ON; 
declare @HistoryStartDate datetime,
		@HistoryEndDate datetime

set @HistoryStartDate='19000101'
set	@HistoryEndDate = getdate();

with JobHistData as(  
	select j.name,
			date_executed=convert(datetime,convert(varchar(8),h.run_date)) +' '+stuff(stuff(right(1000000+h.run_time,6),3,0,':'),6,0,':'),
			jobsecsduration=h.run_duration/10000*3600+h.run_duration%10000/100*60+h.run_duration%100,
			jobtimeduration=stuff(stuff(right(1000000+h.run_duration,6),3,0,':'),6,0,':'),
			jobstarttime=stuff(stuff(right(1000000+s.active_start_time,6),3,0,':'),6,0,':')
	from msdb.dbo.sysjobhistory  h
	inner join msdb..sysjobs j on h.job_id = j.job_id
	inner join msdb..sysjobschedules js on js.job_id=j.job_id
	inner join msdb..sysschedules s on s.schedule_id=js.schedule_id
	where j.name='DatabaseBackup - USER_DATABASES - FULL'
	and step_id=0     --Job Outcome
	and run_status=1  --Succeeded
	)

select Name,AvgDuration=convert(int,avg(jobsecsduration*1)),CONVERT(VARCHAR, DATEADD(second,convert(int,avg(jobsecsduration*1)),0),108) as AvgTime,JobStarttime
from JobHistData 
where date_executed>=dateadd(day,datediff(day,'19000101',@HistoryStartDate),'19000101')  and date_executed<dateadd(day,1+datediff(day,'19000101',@HistoryEndDate),'19000101')
group by Name,jobstarttime

The query gives me the following result (server names are hidden)

A nice overview, but not really useful for rescheduling the backup jobs. I needed something visual, with timelines, so that I could easily see which backups were running at the same time. I tried several programs, but I ended up with a solution in… (oh yes) Excel! To be honest, I’m not an expert in Excel 😉

First, I pasted the result of my query into Excel. Columns C and D should have the Time type.
I added some extra columns:

  • Current Endtime = Column C + D
  • Number Starttime = Column D but the type is Number
  • Number Endtime = Column E but the type is Number
  • Number Elapsed Time = Column G – F. Column should also have the Number type

On the second sheet add an empty Stacked Bar Chart

Right click on the chart and select “Select Data…”

The Chart data range should be the first column Server Name
Edit the Series:

  • Change the Series name to Starttime
  • Series values will be column D Current Starttime

Add a second Series

  • Change the Series name to Duration
  • Series values will be column H Number Elapsed Time

Edit the Category

  • Values will be column A Server Name

You should now have a similar configuration

Now that we have all the data in our chart, let’s fix the layout

Right click on the Horizontal Axis and select Format Axis…

On your right, select the AXIS Options and change the following values

  • Bound Minimum    0,791666666666667    This is 19:00, the start time of my backups
  • Bound Maximum    1            This is 00:00
  • Units Major        0,010416666666666    This is every 15 minutes

As a result, the axis should now be a time value from 19:00:00 till 00:00:00 for every 15 minutes.

Right click on the Vertical Axis, go to AXIS Options and select Categories in reverse order. This will change the order of my server list and will put the horizontal axis on top.

The last thing that you need to do is to hide the Starttime series. We are only interested in the duration of the backup starting from the start time. In the chart, right click the Starttime series (this is normally the blue line) and select Format Data Series. Go to the Fill option and select No fill.

Now you should get a result similar to this

If you want, you can add some extra design to the chart. I finally came up with the following layout

This really gives me a nice overview of how my backups are scheduled.
Setting up my new backup timings will be really easy now. I just have to change the start time on the first sheet.

You can get my Excel example here!

Have fun!

Create an empty AlwaysOn Availability Group

Last week I had to install a 6 node failover cluster at a customer. 4 nodes were used for the high availability solution in site A and 2 nodes were in site B for disaster recovery. Each site was hosting 9 SQL Server instances.

On my last day, I had to configure the AlwaysOn Availability Groups (AVG) as the disaster recovery solution. I chose the following settings for my AVG's

  • Asynchronous commit mode
  • Backups on primary replica
  • Readable secondary

There was just one small "problem". There were no databases to include in the AVG's because this was a completely new platform. I couldn't use the AlwaysOn Availability Group Wizard because that expects a database. But I wanted to prepare the AVG's in advance so that the customer only needed to add the databases to the groups at a later stage.
Since I had to install 9 AVG's, I created a script that I can reuse every time I want to create an empty availability group.
The script will create an Availability Group on the primary replica, will add a group listener and will join one secondary replica all in one execution.

You can find the script below but I must mention a few remarks and restrictions

  • You have to execute the script in SQLCMD mode in SQL Server Management Studio. To do this, go in the main menu to Query –> SQLCMD Mode
  • Only one secondary replica can be used
  • The settings for the replicas are the ones mentioned above
  • Only one IP address can be linked to the group listener
  • Named instances are supported

Before you can execute the script you only need to specify a few parameters:

  • PrimaryReplica: The primary replica. Default and named instances are supported, e.g. Server\Inst01
  • SecondaryReplica: The secondary replica. Default and named instances are supported, e.g. Server\Inst01
  • EndPointPort: Specify the port for the endpoints
  • AvailabilityGroup: The name of the availability group
  • Listener: The name of the group listener
  • IPListener: The IP address of the listener
  • ListenerPort: Specify the listener port
  • ListenerSubnet: Specify the subnet of the listener IP address

You can find my script below. If you should have questions, remarks or suggestions, please do leave a comment so that I can improve it.

Have fun!

--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:setvar PrimaryReplica Node1 --Specify the primary replica
:setvar SecondaryReplica Node2 --Specify the secondary replica
:setvar EndPointPort 5022 --Specify the port for the endpoints
:setvar AvailabilityGroup AVG_Group --Specify the name of the availability group
:setvar Listener AGL --Specify the name of the group listener
:setvar IPListener 192.168.0.20 --Specify the IP Address of the group listener
:setvar ListenerPort 1433 -- Specify the listener port
:setvar ListenerSubnet 255.255.240.0 --Specify the subnet of the listener IP Address

:Connect $(PrimaryReplica)

USE [master]
GO
declare @AVGLogin as sysname
declare @CMD nvarchar(max)

select @AVGLogin=service_account from sys.dm_server_services
where servicename like 'SQL Server (%'

if not exists(select name from sys.syslogins where name=@AVGLogin)
begin
 set @CMD=N'CREATE LOGIN [' + @AVGLogin + '] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]'
 print 'Create service account as server login...'
 exec sp_executesql @CMD
end

if not exists (select name from sys.endpoints where name='Hadr_endpoint')
begin
print 'Create HA/DR endpoint...'
CREATE ENDPOINT [Hadr_endpoint] 
 AS TCP (LISTENER_PORT = $(EndPointPort))
 FOR DATA_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES)
end

IF (SELECT state FROM sys.endpoints WHERE name = N'Hadr_endpoint') <> 0
BEGIN
 ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED
END


set @CMD=N'GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [' + @AVGLogin + ']'
print 'GRANT CONNECT ON ENDPOINT...'
exec sp_executesql @CMD

GO

:Connect $(SecondaryReplica)

USE [master]
GO

declare @AVGLogin as sysname
declare @CMD nvarchar(max)

select @AVGLogin=service_account from sys.dm_server_services
where servicename like 'SQL Server (%'

if not exists(select name from syslogins where name=@AVGLogin)
begin
 set @CMD=N'CREATE LOGIN [' + @AVGLogin + '] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]'
 print 'Create service account as server login...'
 exec sp_executesql @CMD
end

if not exists (select name from sys.endpoints where name='Hadr_endpoint')
begin
print 'Create HA/DR endpoint...'
CREATE ENDPOINT [Hadr_endpoint] 
 AS TCP (LISTENER_PORT = $(EndPointPort))
 FOR DATA_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES)
end

IF (SELECT state FROM sys.endpoints WHERE name = N'Hadr_endpoint') <> 0
BEGIN
 ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED
END

set @CMD=N'GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [' + @AVGLogin + ']'
print 'GRANT CONNECT ON ENDPOINT...'
exec sp_executesql @CMD

GO

:Connect $(PrimaryReplica)

IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='AlwaysOn_health')
BEGIN
 ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE=ON);
END
IF NOT EXISTS(SELECT * FROM sys.dm_xe_sessions WHERE name='AlwaysOn_health')
BEGIN
 ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE=START;
END

GO

:Connect $(SecondaryReplica)

IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='AlwaysOn_health')
BEGIN
 ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE=ON);
END
IF NOT EXISTS(SELECT * FROM sys.dm_xe_sessions WHERE name='AlwaysOn_health')
BEGIN
 ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE=START;
END

GO

:Connect $(PrimaryReplica)

USE [master]
GO

declare @SystemAddressPrimary nvarchar(256)
declare @SystemAddressSecondary nvarchar(256)

if charindex('\','$(PrimaryReplica)')=0
 set @SystemAddressPrimary='$(PrimaryReplica)'
else
 select @SystemAddressPrimary=substring('$(PrimaryReplica)',0,charindex('\','$(PrimaryReplica)'))

if charindex('\','$(SecondaryReplica)')=0
 set @SystemAddressSecondary='$(SecondaryReplica)'
else
 select @SystemAddressSecondary=substring('$(SecondaryReplica)',0,charindex('\','$(SecondaryReplica)'))

declare @CMD nvarchar(max)
set @CMD='
CREATE AVAILABILITY GROUP [$(AvailabilityGroup)]
WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY)
FOR 
REPLICA ON N''$(PrimaryReplica)'' WITH (ENDPOINT_URL = N''TCP://' + @SystemAddressPrimary + ':$(EndPointPort)'', FAILOVER_MODE = AUTOMATIC, AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, BACKUP_PRIORITY = 50, SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY)),
 N''$(SecondaryReplica)'' WITH (ENDPOINT_URL = N''TCP://' + @SystemAddressSecondary + ':$(EndPointPort)'', FAILOVER_MODE = AUTOMATIC, AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, BACKUP_PRIORITY = 50, SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY));'

print 'Create the availability group...'
exec sp_executesql @CMD

print 'Create the availability group listener...'
ALTER AVAILABILITY GROUP [$(AvailabilityGroup)]
ADD LISTENER N'$(Listener)' (
WITH IP
((N'$(IPListener)', N'$(ListenerSubnet)')
)
, PORT=$(ListenerPort));

GO

:Connect $(SecondaryReplica)

ALTER AVAILABILITY GROUP [$(AvailabilityGroup)] JOIN;

GO

Adding a crappy backup network to your cluster

Last week, I got a really nice challenge from a customer.
They wanted to add a backup network to their 2 node cluster.
No problem, that's pretty straightforward, but… they mentioned that the backup network is crappy and unstable…
Even though the network was unstable, they still wanted to use it.

The tricky part here was that the cluster was not allowed to fail over in case of a network failure of the backup LAN.
On the other hand, SQL Server must be dependent on the backup IP address if you want to use it.

This is how I worked out my solution
First of all, I added the backup IP address in my cluster configuration. I added it under the SQL Server Network Name.

At this moment, the SQL Server Network Name is now dependent on both IP addresses.
The SQL Server resource is dependent on the SQL Server Network Name, which means that it will listen on both IP addresses.
I’ve verified it by restarting the SQL Server and checked the SQL Server Log after the reboot.
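Instead of scrolling through the log manually, you can also search it from T-SQL. xp_readerrorlog is undocumented, so treat this as a convenience sketch rather than an official method:

-- Search the current SQL Server error log (0) for the listener entries written at startup;
-- one entry should appear per IP address/port combination
EXEC master.dbo.xp_readerrorlog 0, 1, N'Server is listening on';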

So far so good. But with this configuration, a failover will occur when the backup network goes offline.
SQL Server will go offline because it’s dependent on the SQL Server Network Name, which is also offline because it is dependent on both IP addresses.
To fix this problem, I changed the dependencies of the SQL Server Network Name. Instead of being dependent on both IP addresses, only one IP address should be up and running.
I did this by changing the dependencies from AND into OR.

With this configuration, the SQL Server Network Name will stay online if the backup network goes offline.
But the reverse is also true… if the production LAN goes offline and the backup LAN stays up, the SQL Server Network Name will also stay online.
In this particular case, of course, we do want a failover to be initiated.

To make the failover possible, I added an extra dependency. The backup IP address must be dependent on the production LAN.

Let's follow the chain of what happens when the production LAN goes offline:

  • The backup network will go down because it's dependent on the production LAN
  • The SQL Server Network Name will go down because both the production and backup LANs are down
  • SQL Server will go down because it's dependent on the SQL Server Network Name
  • Failover is initiated!

 

To be sure that no failover will occur in case of restart failure of the backup IP address, I’ve unchecked the option “If restart is unsuccessful, fail over all resources in this Role” in the Policies tab of the backup IP address.

With this configuration the resource will try to restart every 2 minutes for 1 hour. If it is still not possible to restart, it will stay in a failed status.

I've tested this configuration thoroughly and it's working perfectly.

But… the real solution in this case was to get the backup network stable; still, this configuration is a nice workaround.

My MVP Award Story

3,5 years ago, I started working as a SQL Server DBA Consultant at Kohera. The goal that I had, next to being a good SQL DBA, was becoming a SQL Server MVP.
Many people told me that it was really hard and difficult to become an MVP, but I believed in myself that it was possible. I already knew a lot about SQL Server from my previous job, but I still had a lot to learn.
I began with reading a lot of blog posts, SQL books, forums, … and took the exams to get Microsoft certified. I also joined the SQL User Group in Belgium to get to know the SQL people and to learn about the interesting stuff they provide in the user group sessions.

However, I was new and unknown in the SQL community world, so I started to figure out how people could get to know me.
I created a Twitter account (@Pieter_Vanhove) and started following the local “SQL guru’s” in Belgium and some of the most famous SQL guys (MCT’s and MVP’s) in the world.
I started participating in discussions on Twitter and replied on some #sqlhelp questions. Every week, slowly, I got a few extra followers. But I knew that tweeting wouldn’t be enough.

I also wanted to share my SQL experience and started this blog. I don't blog a lot, but I try to share my SQL experience as much as possible. I mostly blog about interesting things that I have noticed in my day-to-day job or interesting scripts that I've created.

After a few months, I was asked by Microsoft if I wanted to give a Technet Live Meeting. Sure, why not! The goal of this is to provide an online session that people can join on the web. It was actually my first speaker experience, although it was not in front of a live audience.
As a result of these live meetings, I was selected to join the Microsoft Extended Expert Team (MEET). All MEET members are Belgian experts who have been selected for their skills and knowledge in their area of expertise. They all share their knowledge via their blogs and are regular speakers at Microsoft events or User Group meetings.

After one year, the SQLUG asked me if I would like to give a session at SQL Server Days. This is *the* SQL event in Belgium every year. Of course I did, and I was really excited about it, my first session in front of a live audience! It was great and it tasted like more. So I started speaking at other events like Community Day, 12 hours of SQL Server 2012, SQLUG sessions and Microsoft Discovery Day. I already tried a few times to speak at international events like SQLBits or the SQL PASS Summit, but I never got selected… (maybe this year 😉 ) I didn't give up and wanted to see what my chances were to present at a SQL Saturday event. So I submitted a few sessions for SQL Saturday in Slovenia and guess what, I got selected!!

And last but not least, this year I joined the SQL Server Days group to help organize the biggest SQL event of the Benelux. It was again a really great experience.
I met a lot of nice people from the SQL community (or SQL Family as they call it), like Thomas LaRock (Twitter|Blog), Kevin Kline (Twitter|Blog), Mark Stacey (Twitter|Blog), … and they gave me some really good advice and tips.

A few weeks later, I got nominated for the MVP Award and on the 1st of January 2014 I got an e-mail from Microsoft starting with

Congratulations! We are pleased to present you with the 2014 Microsoft® MVP Award!

I would also like to thank all the people who supported me and believed in me!

Careful with autogrowth!

A few months ago I was invited by a customer to do an audit of their SQL Servers.
One of the issues that I found was the “implementation” of the autogrowth settings, like growth with 1MB or growth in %.
This week, they invited me back to perform the corrective actions on the issues that I have found.
I was manually modifying the autogrowth setting of one of the databases and I made a mistake… Instead of changing the MB value, I accidentally changed the % value to 1024. To my surprise, SSMS accepted this value.
I expected that the percentage value couldn't go above 100%. Wrong guess…
My mistake triggered me to perform some testing on the autogrowth setting.

Let's see if it actually grows by 1024%.
I just created a simple DB with an initial data size of 10 MB and changed the autogrowth to 1024%. (Don't mind the autogrowth setting of the log file.)
I’ve done my tests on a SQL 2012 and on a SQL 2014 instance.

After loading some data into my database, my file has actually grown by 1024%!

(10 MB x 1024%) + 10 MB (initial size) = 112,4 MB!
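You can double-check the file size and the growth setting after each load with a small query against sys.database_files (a minimal sketch, run in the test database):

USE [testgrowth];
GO
-- size is stored in 8 KB pages; growth is pages or a percentage, depending on is_percent_growth
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth,
       is_percent_growth
FROM sys.database_files;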

Let’s see if we can push it a little bit further. What can be the maximum value of the percent growth?
Well, it turns out that if you enter a large number like 9999999999 in the GUI, it will change it back to 2147483647, which is the maximum value of an integer.
I've tried the same with a T-SQL statement

ALTER DATABASE [testgrowth] MODIFY FILE ( NAME = N'testgrowth', FILEGROWTH = 9999999999%)

and I got the following error:
Incorrect syntax near '9999999999'.

When I changed the 9999999999 into 2147483647%, my query completed successfully! As soon as you enter a number higher than 2147483647, regardless of whether you put KB, MB, GB, %, … after it, SSMS sees it as an "Incorrect syntax".

So now we know what the limits are for %, but are they the same when you choose autogrowth in MB?
Let's give it a try. I entered 9999999999 into the MB autogrowth field and SSMS automatically changed it to 1048576 MB. It looks like autogrowth in MB is limited to 1 TB in the GUI and not to 2147483647 like the percentage growth. It's a completely different value.
If I try a T-SQL statement

ALTER DATABASE [testgrowth] MODIFY FILE (NAME = N'testgrowth', FILEGROWTH = 2147483647MB)

(a larger number gives the incorrect syntax error), I got the following error message:

Msg 1842, Level 16, State 1, Line 1

The file size, max size cannot be greater than 2147483647 in units of a page size. The file growth cannot be greater than 2147483647 in units of both page size and percentage.

Let’s see what happens when I do a T-SQL statement that is larger than 1048576MB (GUI limit) and smaller than 2147483647 (T-SQL limit).

ALTER DATABASE [testgrowth] MODIFY FILE (NAME = N'testgrowth', FILEGROWTH = 2097152 MB)

completes successfully.
Hey! Wasn’t that limited to 1TB by the GUI?

Let's open the GUI again to see how SSMS deals with it. Well… not that well…

Conclusion here is that the GUI and the T-SQL statement are using different limitations, which should be corrected.

I've opened a Connect item at https://connect.microsoft.com/SQLServer/feedback/details/812486/gui-and-the-t-sql-statement-are-using-different-limitations-for-autogrowth-in-mb

Be careful when you change the autogrowth settings; if you make a mistake, the results can be quite strange.

Speaking at SQL Saturday Slovenia #274

On the 8th of November, I delivered a session (together with Wesley Backelant) at SQL Server Days about the new AlwaysOn functionalities in SQL 2014. I love to talk about SQL Server, but unfortunately I was only able to speak at local Belgian events like SQL Server Days, Community Day, Microsoft Discovery Day… I already tried a few times to speak at other international events like SQLBits or the SQL PASS Summit, but I never got selected… I didn't give up and wanted to see what my chances were to present at a SQL Saturday event. So I submitted a few sessions for SQL Saturday in Slovenia and guess what, I got selected!! It will be my first international event and I'm really looking forward to meeting the Slovenian SQL people and the other speakers. Special thanks go to my current employer Kohera for supporting me!

Harder, Better, Stronger and More Secure
In the first part of my session, I'm going to talk about the new functionalities of AlwaysOn in SQL Server 2014, including a live demo of adding a replica to Windows Azure. The second part will be about the new backup features in SQL 2014, like backup to URL, managed backup and encryption. I will not focus on the basic AlwaysOn and backup capabilities but on the investments SQL Server 2014 makes in the hybrid cloud area.

More information about the event can be found on http://www.sqlsaturday.com/274/eventhome.aspx

Hope to see you all there!

How to open multiple SQL files in one SQL Server Management Studio

As a DBA consultant, I have a lot of scripts stored on my laptop. Today I noticed something annoying…
I had to open a lot of my scripts, and every time I double clicked on a .sql file, a new Management Studio opened.
At some point, there were more than 10 Management Studios open, all consuming memory on my laptop…

Isn’t it possible to open all the queries in just one management studio?
Well actually, it is. After searching the internet, I found out that there is some registry editing involved.

Open the registry editor (regedit) and search for the following key

HKEY_CLASSES_ROOT\ssms.sql.11.0\Shell\Open\Command

The default key is as follows: "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\ssms.exe" /dde
Please note that I have used a default installation. It might be that your default key is different than this one.

You have to add a parameter “%1” to the default key. %1 indicates the file that you want to open.
Your default key should look similar to this one:

"C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\ssms.exe" "%1" /dde

Normally, when you open 2 SQL files now, they should be opened in the same Management Studio. But… on my laptop… it didn't…

So, I figured out I had to change a second registry key

HKEY_CLASSES_ROOT\.sql

The default key was set to sql_auto_file

 

When I searched for that registry key, I saw that the open command key was set to open Notepad.

 

To solve this, I changed the default key of the registry entry .sql into ssms.sql.11.0

 

When you open 2 (or more) SQL files now, they should be opened in the same management studio.

Pieter

Restore a database from Windows Azure with SQL Server 2014 CTP2

In my previous blog post I explained how you could take a backup of a database to Windows Azure Storage with SQL Server 2014 CTP2.
In this blog post I’m going to explain how you can restore your database from Windows Azure.

If your database is still available on your server, right click on the database and select Tasks -> Restore -> Database

 

Apparently, the management studio knows that a backup has been taken to Windows Azure and it opens the “Connect to Windows Azure Storage” window.

First of all, you need to fill in the Storage Account. That will be the one that you created when taking the backup to Windows Azure.
To know the correct storage account, go to your Windows Azure web site, select Storage and choose the correct storage account.
In my example here, this will be “koherastorage”.

 

You should also fill in the Account Key that is linked to your storage account. To access stored content, applications (and also SSMS) use one of the two 512-bit storage access keys.
Click on the manage keys button in the main storage window. A new window with the keys will open.

Copy the primary access key to your clipboard and paste it into the Account Key field of the "Connect to Windows Azure Storage" window of SSMS.
You will notice that the key is masked with *stars*

Select the correct credential to connect to Windows Azure. You can use the credential that you have created while creating the backup.

Click the Connect button and the restore window will appear as we all know it, like in previous SQL Server editions.

 

Select the appropriate backup sets that you want to restore and click OK to start the restore.
On top, you see that a tail-log backup will be taken. And yes indeed, it's also stored in my Windows Azure storage container!
You should get a result similar to this

 

As you can see, restoring a backup from Windows Azure is as simple as restoring a database from local disk.
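If you prefer to script the restore instead of using the wizard, the same operation can be done in T-SQL. Below is a minimal sketch; the database, container, file and credential names are illustrative, and the credential is the key-based one used for backup to URL:

-- Restore straight from the Windows Azure blob that holds the backup file
RESTORE DATABASE [AdventureWorks2012]
FROM URL = 'https://koherastorage.blob.core.windows.net/backupcontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential', REPLACE;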
However, there is still one question that keeps me busy. How will I back up or restore my database if I don't have an internet connection available…?

Have fun!

Pieter

 

 

Backup to Windows Azure with SQL Server 2014 CTP2

Since SQL Server 2014, you can take a backup of your database to Windows Azure Storage.
CTP2 has recently been released, and it is now possible for you as a DBA to create a manual backup to Azure in SQL Server Management Studio.
In this blog post, I’m going to show you how easy it is to do this.

Before you can start with creating a backup, you have to create a storage account to store the backup.
Go to your manage page of Windows Azure.

Click on Storage and select New in the left bottom corner

             

 

Now select Data Services -> Storage -> Quick Create.
Type in a URL name that you want to use and select the location where you want to create the storage

 

When all the data is filled in, click the create button.

As a result your storage account is created

 

Next step is creating a container to store blob files. Containers provide a logical grouping for blobs stored in the Windows Azure Blob service.
When you upload a backup to the Blob service, you must specify a container.

Click on the storage you have just created and select containers.
Click on the link “Create a Container”

In the new container window, type in the container name and select the appropriate Access type.
I’m choosing Private because I’m the only one that can have access to my backup files.

As a result the container is created. You will see that a new URL for the container is created. We will need that later on in the management studio when taking the backup.

Now that the Azure side is configured, let's open the Management Studio and take a backup of our database.
Right click on the database -> Tasks -> Backup

 

As you can see, the destination options have changed a lot. You can choose to take a backup to disk or to URL.
Choose URL to take a backup to Windows Azure.

By default, the Management Studio suggests a file name, but of course this can be changed.

We also have to create a SQL Credential to store the Windows Azure storage authentication information in SQL Server.
Once it is created, you can reuse it.

Click on the button “Create” to create the credential
Specify the credential name.
You have to specify the management certificate, so that you can connect to Windows Azure.
If you don’t have the certificate, you can also connect to Azure by using the publishing profile file.
Go to https://manage.windowsazure.com/publishsettings/Index?client=vs&SchemaVersion=1.0 to download your configuration file and import it into your management studio by clicking the browse button.

Select the storage account to use and click the create button.

 

You should get a similar result to this one. Notice that the URL prefix is now the same as the URL of the container that you have just created.
If it is not the same, paste the container URL into the URL prefix field.

 

Click OK and your backup will start and finish successfully.
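The GUI is handy, but the same backup can also be scripted in T-SQL. The sketch below assumes a credential that holds the storage account name and one of its access keys; the container, credential, key and database names are placeholders:

-- The credential stores the storage account name (identity) and one of the access keys (secret)
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'koherastorage',
     SECRET = '<storage access key>';

-- Back up straight to the blob container created earlier
BACKUP DATABASE [AdventureWorks2012]
TO URL = 'https://koherastorage.blob.core.windows.net/backupcontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;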

Let's go back to my Azure Manage page. When you check the container, you will see that a .bak file has been added

Well, that wasn't that difficult if you ask me.

Next blog post will be the restore of the database from Windows Azure.

 

Verify Best Practices for Tempdb

As a DBA for many years now, I know that tempdb needs some tuning after installation.
I found these 3 articles on the Confio website that explain the best practices for tempdb very well

You could check these settings manually but I had to check tempdb configuration for more than 100 servers today.
A good DBA tries to automate as much as possible, so I built a script that verifies all the best practices for tempdb.

Just run it against your Central Management Server or on a separate server and you have all the “wrong” tempdb settings in no time!
Please note that the script only works from SQL 2005 onwards.

--Place data file: Data files on own disk
if exists(select substring(physical_name,1,2) from sys.master_files
	where database_id=2
	and substring(physical_name,1,2) IN (select substring(physical_name,1,2) from sys.master_files
	where database_id<>2))
	print 'Data files of Tempdb are not on a separate drive'

--Place data file: Log file not on same disk as data disk
if exists(select substring(physical_name,1,2) from sys.master_files
	where database_id=2 and type=0
	and substring(physical_name,1,2) IN (select substring(physical_name,1,2) from sys.master_files
	where database_id=2 and type=1))
	print 'Log file of Tempdb is on the same drive as the data file'

--Place data file: Tempdb on the C-drive
if exists(select substring(physical_name,1,2) from sys.master_files
	where database_id=2 and type=0
	and substring(physical_name,1,2) = 'C:')
	print 'Tempdb is on the C-drive'

--Initial Size is set to default 8 MB
if (select sum(size) from sys.master_files where  database_id =2 and type=0)<=1024
	print 'Tempdb initial size is too small'

--Autogrowth is set to percentage
if (select sum(convert(tinyint,is_percent_growth)) from sys.master_files where database_id =2)>0
	print 'Tempdb has configured one or more files as percentage autogrowth'

--Multiple data files according to number of logical processors
if ((select count(1) from master.sys.master_files where type = 0 and database_id = 2)<(select cpu_count from sys.dm_os_sys_info)
   and (select cpu_count from sys.dm_os_sys_info)<=8) OR
   ((select count(1) from master.sys.master_files where type = 0 and database_id = 2)<8
	and (select cpu_count from sys.dm_os_sys_info)>8)
		print 'Fewer tempdb data files than logical processors'
ELSE
BEGIN
	IF (select count(1) from master.sys.master_files where type = 0 and database_id = 2)>8
	  print 'Too many tempdb data files according to the number of logical processors'
END

--Verify if all data files have the same size
if (select count(1) from (select  count(1) as counter from master.sys.master_files c where   c.type = 0 and c.database_id = 2 group by c.size) as c)>1
  print 'Different sized tempdb data files detected'

DBA Checklist with Policy Based Management 2.0

In my last blog post I talked about how easy it is to use Policy Based Management for your daily DBA Checklist.
And it gets even better when you use the PBM in combination with a Central Management Server.
I check my customers' servers in no time 😀

However, after using it for a few weeks, I noticed that some improvements were possible.
The first improvement was the policy where I check the free disk space of the server.
The initial version was going to check if the amount of free space of the volume that contains the data file was more than 10 GB.
See this screenshot below

 

But what about the disks where the log files are stored? They are not checked.
And what happens if your disk is smaller than 10 GB, like for example the system databases LUN? The policy will always fail.
I had to find a way to show the free disk space as a percentage for every disk in my server.
I've checked all the facets in PBM, but I couldn't find one that was useful. If you can find one, please let me know.

I had to come up with a script that could do the trick.
In the old days, you had to build a script that used the xp_fixeddrives stored procedure. The problem is that this procedure can't check mount points.
The best way nowadays is to use PowerShell. But then again, some old servers didn't have PowerShell installed and the script failed again.
I found some useful scripts on this site http://www.mssqltips.com/sqlservertip/2444/script-to-get-available-and-free-disk-space-for-sql-server/ and combined and tuned a few of them.

SET NOCOUNT ON
Declare 
@Free_Bytes BIGINT,
@TotalBytes BIGINT,
@Drive Char(1),
@sql varchar(1000),
@svrName varchar(255)
set @svrName = convert(varchar(255),SERVERPROPERTY('MachineName'))
set @sql = 'powershell.exe -c "Get-WmiObject -ComputerName ' + QUOTENAME(@svrName,'''') + ' -Class Win32_Volume -Filter ''DriveType = 3'' | select name,capacity,freespace | foreach{$_.name+''|''+$_.capacity+''%''+$_.freespace+''*''}"'
--creating a temporary tables
CREATE TABLE #output (line varchar(255))
CREATE TABLE #DiskSpace ( Drive Char(1), Total_Space Bigint, Free_Space Bigint,Percentage_Free as (Free_Space*100 / Total_Space ) )
CREATE TABLE #Fixeddrives ( Drive Char(1), Free_Space Bigint)
CREATE TABLE #fsutil ( ID INT IDENTITY , Drivedata Varchar(2000))
--inserting disk name, total space and free space value in to temporary table
insert #output
EXEC xp_cmdshell @sql
--select * from #output
if (select count(1) from #output where line like '%powershell.exe%')=0
BEGIN
	insert into #DiskSpace
	select rtrim(ltrim(SUBSTRING(line,1,CHARINDEX('|',line) -3))) as drivename
		  ,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('|',line)+1,
		  (CHARINDEX('%',line) -1)-CHARINDEX('|',line)) )) as Float),0) as 'TotalSpace'
		  ,round(cast(rtrim(ltrim(SUBSTRING(line,CHARINDEX('%',line)+1,
		  (CHARINDEX('*',line) -1)-CHARINDEX('%',line)) )) as Float),0) as 'FreeSpace'
	from #output
	where line like '[A-Z][:]%' and LEN(line)>6
	order by drivename
END
ELSE
BEGIN
	Insert into #Fixeddrives exec master.dbo.xp_fixeddrives
	insert into #DiskSpace ( Drive)
	select Drive from #Fixeddrives

	DECLARE Drive_cursor CURSOR FOR
	SELECT Drive from #DiskSpace
	OPEN Drive_cursor
	FETCH NEXT FROM Drive_cursor INTO @Drive

	WHILE @@FETCH_STATUS = 0
	BEGIN

	select @sql = 'insert into #fsutil exec master.dbo.xp_cmdshell ''fsutil volume diskfree '+ @Drive+':'''
	exec(@sql)

	select @Free_Bytes = substring (Drivedata, charIndex (':', Drivedata, 0)+ 1 , 1000)
	from #fsutil where Drivedata like '%Total # of free bytes%'

	select @TotalBytes =substring (Drivedata, charIndex (':', Drivedata, 0)+ 1 , 1000)
	from #fsutil where Drivedata like '%Total # of bytes%'

	Update #DiskSpace set Total_Space = @TotalBytes, Free_Space=@Free_Bytes
	WHERE Drive = @Drive

	TRUNCATE TABLE #fsutil
	FETCH NEXT FROM Drive_cursor INTO @Drive

	END
	CLOSE Drive_cursor
	DEALLOCATE Drive_cursor

END

select * from #DiskSpace
order by Drive

--Drop the temporary tables
drop table #output
drop table #DiskSpace
drop table #Fixeddrives
drop table #fsutil

You can use this script in the condition of your policy by using the ExecuteSql option

 

The second issue was the full backup check. Some big databases, like data warehouses, are too big to perform a full backup every day.
Instead of a full backup, a daily differential backup is taken and a full backup is taken every week.
With that backup strategy my full backup policy was failing, so I also had to cover the differential backups in my condition.
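The gist of the adapted condition is that a database is fine when either a full or a differential backup was taken recently. A T-SQL sketch of that check (not the exact policy expression) could look like this:

-- Backup types in msdb: D = full database backup, I = differential backup
-- Lists databases without a full or differential backup in the last 24 hours
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_or_diff_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type IN ('D', 'I')
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());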

 

You can download the new version of my policies from my SkyDrive and import them into your Central Management Server.
If you should have any suggestion on improving the policies, do not hesitate to contact me!

Pieter

DBA Checklist with Policy Based Management

In my job as a SQL Server Database Consultant, I have my “recurring” customers. This means that every week, I’m at the customer for 1 day to do day-to-day DBA tasks.
Since those customers don't have a DBA, I have to perform the daily DBA checklist on a weekly basis… Initially, I started by executing some T-SQL scripts to verify the SQL Servers. But as more servers were installed, my DBA checklist became really time consuming.

So, I had to find a more efficient way to perform the same checks. I started playing with the Policy-Based Management feature and created my "DBA Checklist" policies, which I check against all the SQL Servers at once.
It's less time consuming and it gives you a nice overview of failed checks.

The first step is to create a Central Management Server (CMS) and register all your servers under that CMS.
I’m not going into detail on this but you can find more information on http://msdn.microsoft.com/en-us/library/bb895144.aspx

I’ve created the following DBA Checklist

  • Verify if there were any failed SQL Agent Jobs during the last week
  • Verify if all the SQL Server Services are running
  • Verify if all databases are in a “Normal” state (not suspect, restoring, offline, …)
  • Verify free disk space
  • Verify free space in the data files
  • Verify if a full backup was taken of the database in the last 24 hours
  • Verify if a transaction log backup was taken of the database in the last hour
  • Verify if there were errors in the error log with a severity > 17 for the last 7 days

This checklist is pretty basic, I know, but it gives me a good idea if my SQL Servers are fine or not. Feel free to give me any suggestion of checks that I can add to the list.
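To give you an idea of what such a check boils down to, the database state verification is essentially the following query (just a sketch of the idea, not the exact policy condition):

-- Any database that is not ONLINE (suspect, restoring, offline, ...) gets flagged
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE';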

You can download the policies from my SkyDrive and import them into your Central Management Server.
Please note that the policies will not work against a SQL Server 2000 or lower.

Connect to your CMS – Go to Management – Policy Management – Policies
Right click Policies and select “Import Policy”

 

Select the XML files you have just downloaded as the "Files to import" and click OK.

 

If all went fine, you should get a result like the one below

 

Now let’s do a check against all my servers.
Right click on your CMS and select “Evaluate Policies”

As a source, select the CMS server again, because that's where you have just imported the policies

Select the policies that you want to verify. You can easily find my policies because I’ve put them in the category “DBA Checklist”

Now, just click the Evaluate button. As a result, all the selected policies will be checked against the servers that are registered under your CMS.
When the checks are finished, you should see this kind of result

Below, in the target details area, you can see which checks have failed. Click on the "View" link to see more details and correct them.
In the end, you want to see all your policy checks green 😀!

If you should have suggestions or you find any improvements, please let me know so that I can adapt them.

Have fun!

 

My DACPAC Experience: How to create a DACPAC and deploy it to SQL Server

I recently started working with DACPACs at a customer. The developers wanted to use the data-tier application feature to perform upgrades of the databases and they wanted my help to implement it.
In short, as MSDN states, a data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects, like tables, views, and instance objects.
It allows you to create a package of SQL Server objects and deploy that package on a SQL Server.

In this blog post, I’m going to explain how you can create a DACPAC with the SQL Server Data Tools (SSDT) and deploy it on your SQL Server.

How to create a DACPAC

My goal is to create a new database "MyDACPAC" on my SQL Server which contains 1 simple table, Person.
The first step is creating a database project in SSDT.

Now let's create a simple table "Person". Right click on your project, go to "Add" and select Table.
Just give your table a name and click Add.

My table will look like this

Of course, you can add more than just tables, like indexes, views, stored procedures, …
Go to the following link to get an overview of the DAC Support For SQL Server Objects http://msdn.microsoft.com/en-us/library/ee210549.aspx
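Under the hood, every object you add in SSDT is just a T-SQL script. The Person table could, for example, be defined like this (the column names are my own illustration, not the exact table from the project):

CREATE TABLE [dbo].[Person]
(
    [PersonId]  INT          NOT NULL PRIMARY KEY,
    [FirstName] NVARCHAR(50) NOT NULL,
    [LastName]  NVARCHAR(50) NOT NULL
);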

Let’s imagine that this is the database that I want to create. The only thing that is left is actually creating a DACPAC.
Well, that’s the easy part.
Right click on your project and select Build

Switch to the output window and verify if the build succeeded.
As you can see, the build process created a DACPAC, that can now be deployed on my SQL Server

How to deploy your DACPAC

There are several methods for deploying your DACPAC onto your SQL Server.
Below, I will give you a list of the most common deployment methods I have used.

Deploy Database Wizard

This method will describe how you can deploy your database onto your SQL Server by using the Deploy Database Wizard in the SQL Server Management Studio (SSMS)

Open your SSMS, connect in the Object Browser to the server where you want to create your database.
Right click on “Databases” and select “Deploy Data-tier Application”

 

As soon as the wizard starts, you of course first get a Welcome screen. Just click Next to go to the next step.
In the following step, you need to select the DAC package that you want to deploy. Just go to the path where your SSDT build has created the DACPAC.

 

The next step is called Update Configuration, but the only thing you need to do here is specifying the database name. In my case, this will be “MyDACPAC”

 

The next screen will show you a summary, click next and wait until your database has been deployed.

When the deployment is finished, go to the Object Explorer in your SSMS and do a refresh of the Databases. You will notice that your database is indeed created as it was developed in the SSDT.

 

Publish Database with SSDT

A second method that I often used was deploying the database directly from SSDT. This is the so-called "Publish" of the database. In this example, I'm going to deploy the same database to the same server, but of course, the previous database has been removed.

Go to your database project in the SSDT. Right click on the project and select the option “Publish”

First step is providing the Target server connection and the database name that you want to deploy

 

Select the check box “Register as a Data-tier Application” and simply press the “Publish” button.

As soon as the publication starts, the Data Tools Operations window pops up and shows the progress of the publication.

Now, if you go back to your SSMS, and perform a refresh again of your databases, you will notice that the database is exactly created as it was developed in the database project.

 

SQLPackage.exe

The last method that I often use is SQLPackage.exe. It's a command line tool that allows you to automate the deployment of your DAC packages.
The default path of the tool is C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SQLPackage.exe.
In this example, I’m going to deploy the same database to the same server, and once again, the previous database has been removed.

Open a Command Prompt and go to the path that I have just mentioned before

I’ve executed the following command:

sqlpackage.exe /Action:Publish /SourceFile:"c:\users\vanhop1\documents\visual studio 2010\Projects\MyDACPAC\MyDACPAC\bin\Debug\MyDACPAC.dacpac" /TargetServerName:W7-009073\SQL2012 /TargetDatabaseName:MyDACPAC

/Action – I want to publish my DACPAC
/SourceFile – the path where my DACPAC is stored.
/TargetServerName – The SQL Server where you want to create your database
/TargetDatabaseName – The database name that you want to use

When all goes well, you should get this kind of output

Now, if you go back to your SSMS, and perform a refresh again of your databases, you will notice that the database is exactly created as it was developed in the database project.

I think creating and deploying DACPACs is an extremely useful technique that you can use for database development.

Presenting at Community Day 2013

Yesterday, the Community Team (Gill, Davy, Mike) announced that registration for Community Day is now open!
Last year, with 750 registrations, the event was completely sold out. So don’t wait too long to register!

I’m also proud to announce, that for the third time in a row, I’ll be presenting a SQL Server session at the event.

2 years ago I delivered a session on SQL Server 2008 Clustering.
Last year, I also got good comments on my session “I know what you did last summer… on my database”.
This session provided a detailed explanation of SQL Server Audit and the different configurations along with some practical examples.

This year, I’m not going to talk about a SQL Server feature in detail, but about a nice configuration that I have implemented.
It will be a level 300 session about creating a high availability solution for your SQL Server 2012 reporting servers.
I know Microsoft provides a scale-out deployment for high availability, but this is only available in the Enterprise Edition (believe me, really expensive…)
My solution works with standard edition. Do you want to know more? Check out my session abstract here or just come to the event and join my session!

The Belgian community is organizing the 7th edition of Community Day!
Community Day 2013 will take place on June 20th 2013 in Utopolis Mechelen.

More information can be found on http://www.communityday.be

Hope to see you all there!

How to setup SSRS high availability with Standard Edition

Recently I was asked by a customer to build a high availability solution for their SQL reporting environment. The report server database was installed on a SQL Server Failover Cluster. Alright, let’s install a SSRS scale-out deployment. However… my customer didn’t want to buy a SQL Server 2012 Enterprise Edition, so I had to come up with a solution to provide high availability with a standard edition.

When you run 2 or more instances of the SSRS Standard Edition against one SSRS database, which is normal in the Enterprise Edition, you will receive a message that the scale-out deployment is not supported.

I found following blog post http://www.pawlowski.cz/2009/12/sql-server-2005-reporting-services-scale-out-deployment-ssrs-2005-scale-out-deployment-2/ of Pavel Pawlowski (Blog|Twitter) which actually explains what I needed. The idea is to replicate the report server databases from the primary server to the secondary server. A load balancer provides the failover functionality.

The picture below shows you how the setup should be.


In this blog, I'm going to explain how I installed the solution and what issues I had during installation and configuration.

Installing first server

On the first server, I have installed the SQL Server 2012 database engine + the SQL Server 2012 Reporting services. I slipstreamed the installation with SP1 CU2, so I’m using the latest patches of SQL Server. When the installation was finished, I just configured the reporting services in a very standard way as a stand-alone report server. The report server databases were created on the same server.

 

 

When the installation and configuration is finished, you have to take a backup of the encryption key. We will need it on the second server.

 

Installing second server

The next step is installing the secondary server. The method is exactly the same as for the first server. Just install the report server as if it were a stand-alone server. Also make sure that you create the report server database on the server itself. Once the server has been set up, you have to restore the encryption key from the first node to be able to share encrypted data between the 2 servers. To do so:

Open the report server configuration manager

Once you’re connected select Encryption Keys

And click the button Restore

Copy the encryption key backup file from node 1 to node 2.
Select the file location and type in the correct password that you have used for the backup.
Press OK to restore.

Drop the foreign key on the Notifications table as it will cause an error when setting up the replication later on.
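Rather than hard-coding the constraint name, you can generate the DROP statement from the catalog views. A minimal sketch (run it against the ReportServer database and execute the statement it returns):

USE [ReportServer];
GO
-- Generate a DROP CONSTRAINT statement for every foreign key on dbo.Notifications
SELECT 'ALTER TABLE dbo.Notifications DROP CONSTRAINT ' + QUOTENAME(name) + ';'
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.Notifications');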

Setup merge replication publication

After both report servers have been installed, you have to set up merge replication for the ReportServer database. Pavel mentioned in his blog to use merge replication because users are automatically directed to the nodes by load balancers and all nodes need to be updated automatically whenever there is a change on any node.

However, in my situation, the load balancer will always send the requests to the first server. In case of failure, it will point to the second node. The reason is that my customer only has the budget to buy one Standard license, so the second server is considered a hot standby. This setup may lead to some difficulties with Microsoft licensing, but that's another discussion.

I tried to work out the same configuration but tried to use another publication type.

  • Snapshot publication: not really useful in this situation
  • Transactional publication: I was able to set up the publication and subscription; however, as soon as you tried to launch a report on the second server, you got primary key violation errors.
  • Peer-to-peer publication: only available in Enterprise Edition, so not an option

Conclusion: Merge replication is the only possible publication that you can use here.

To setup the publication follow these steps:

Connect to the first SQL Server and go to Replication – Local Publications.
Right-click on Local Publications and select “New Publication”

Use the primary server as its own Distributor

 

Select the reportserver database. It’s not necessary to publish the reportservertempdb

 

Select the Merge Publication Option

 

Since I'm using 2 new installations with SQL Server 2012, I select the option "SQL Server 2008 or Later".

Next step is choosing the Articles for the publication. As Pavel mentions in his blog, we need to exclude the Keys table, because each instance has its own key stored in the table. Each instance has its own name and creates a new entry in the Keys table. And when there is more than one entry in the table, it is automatically detected as scale-out deployment, which is supported only by the Enterprise version.

The Events and Notifications tables are used for monitoring of subscriptions and events in SSRS. To support correct subscription processing, we cannot replicate those tables.

When the scheduled time comes for a schedule, the SQL Server Agent generates an event by executing the scheduled job. The job inserts a row into the Event table of the ReportServer database. This row serves as an event for the Scheduling and Delivery Processor. The event record is deleted after processing.

 

 

When you click Next, the wizard returns some Article issues

 

A uniqueidentifier column will be added to a few tables because they don't contain one yet. If you have a look at those tables, they just contain some upgrade history. I have included them in my publication.

A strange issue I noticed is that, even if your table already has a uniqueidentifier column, an extra column is still added to the original table…

After finishing my tests, it looks like Reporting Services doesn't have any problems with that extra column. I don't know what the behavior will be when you upgrade Reporting Services in the future, so you could possibly run into some upgrade issues.

I don't need to filter the data, so just click Next

 

Create the snapshot immediately as we will add a subscription later on.

 

I have specified the Snapshot Agent security to the SQL Server Agent Account.

 

At the end, we want indeed to create the publication.

 

Complete the wizard by clicking Finish

 

This should be your result after the wizard has finished.

Setup merge replication subscription

Now we are ready to subscribe to the publication of the primary node database.
To do so, connect to the second node. Go to replication – Local Subscriptions.
Right click and select “New Subscriptions”

 

Select the first node, which contains your publication.

 

I have chosen to run the Merge Agent on the subscriber and not on the Distributor

 

Choose the database ReportServer as the subscription database. This is the empty database that was created during the installation.

 

Choose the impersonate process account option for the Subscriber

 

As the replication will be used for a high availability solution, I want the agent schedule to run continuously. The changes on both sides must be replicated as quickly as possible to the other server.

 

Choose to initialize the subscription immediately.

 

Select “Client” as Subscription Type. Pavel mentions in his blog to choose Server, but my situation is a little bit different. I only have one secondary server and all the report requests are sent to the primary node. So, in my case, it’s better to choose client.

 

We want to create the subscription at the end of the wizard

Click finish to end the wizard and to configure the subscription

 

Make sure that the SQL Agent Service account is created as a login

When the replication is working, you should get the following result in the Replication Monitor

 

I had a few issues with the creation of my snapshot, but that was caused by some security issues.

DDL Changes

While I was testing, I noticed that deleting a subscription gave the following error.

Column name or number of supplied values does not match table definition.

I came to the conclusion that the stored procedure AddSubscriptionToBeingDeleted was giving the error.
An insert was done into the table SubscriptionsBeingDeleted, but the number of columns was no longer correct because of the rowguid column that was added for the merge replication. Below you can find the modified stored procedure that solves this issue. You should execute it on both servers.

USE [ReportServer]
GO
ALTER PROCEDURE [dbo].[AddSubscriptionToBeingDeleted]
@SubscriptionID uniqueidentifier
AS
-- Delete subscription if it is already in this table
-- Delete orphaned subscriptions, based on the age criteria: > 10 minutes
delete from [SubscriptionsBeingDeleted]
where (SubscriptionID = @SubscriptionID) or(DATEDIFF( minute, [CreationDate], GetUtcDate() ) > 10)
-- Add subscription being deleted into the DeletedSubscription table
insert into [SubscriptionsBeingDeleted] VALUES(@SubscriptionID, GetUtcDate(),newid())

Now you have a fully operational, highly available reporting solution with SQL Server Standard Edition.

The client PCs connect to a network load balancer. This load balancer always sends the report requests to the primary report server. The load balancer will point to the secondary server in case of a failure of the primary server. The secondary server has been set up exactly the same way as the primary node. Between the 2 nodes I have installed SQL Server merge replication. This replicates all the changes between primary and secondary and the other way around. With this configuration we provide a high availability solution.

Have fun with it!

The trace flags I enable by default

After installing a SQL Server Instance, a good DBA starts configuring the SQL Server Instance, or that’s what I hope 😉
But, I notice that a lot of people “forget” to configure trace flags.

Trace flags are often used for testing or diagnostic purposes.
However, there are a few interesting trace flags that could be implemented on most production environments.
Of course, don’t activate them blindly but think twice before you do so…

Trace Flag 1117    All the files in a filegroup will autogrow together by their specified autogrow increment
More information on SQL Server 2008 Trace Flag -T 1117 by Matthias Sessler

Trace Flag 1118    Forces uniform extent allocations instead of mixed page allocations
More information on SQL Server (2005 and 2008) Trace Flag 1118 (-T1118) Usage by Bob Dorr
More information on Misconceptions around TF 1118 by Paul Randal Twitter|Blog

Trace Flag 2371    To control when the query optimizer generates autostats on a table
More information on Controlling Autostat (AUTO_UPDATE_STATISTICS) behavior in SQL Server

Trace Flag 3226    Suppress the backup transaction log entries from the SQL Server Log
More information on The trace flags that are available in SQL Server

Trace Flag 4199    To control multiple query optimizer changes previously made under multiple trace flags
More information on Trace flag 4199 is added to control multiple query optimizer changes previously made under multiple trace flags

Trace Flag 2562 & 2549   To improve the speed of your DBCC CHECKDB.
More information on Improvements for the DBCC CHECKDB command may result in faster performance when you use the PHYSICAL_ONLY option
And on Faster DBCC CHECKDB Released in SQL 2008 R2 SP1 CU4 : Traceflag 2562 & 2549

 

If you want to enable one of those trace flags, have a look at How do I work with Trace Flags? by PankajAgarwal Twitter
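As a quick illustration (the article above covers this in more detail): a trace flag can be switched on for the running instance with DBCC TRACEON, but to survive a restart it has to be added as a -T startup parameter in SQL Server Configuration Manager.

-- Enable trace flag 1117 globally (-1) on the running instance
DBCC TRACEON (1117, -1);

-- Verify which trace flags are currently active
DBCC TRACESTATUS (-1);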

Could not load file or assembly ‘Microsoft.VisualStudio.Tools.Applications.Core’

I was migrating SSIS packages from a SQL Server 2008 R2 server to a new SQL Server 2012.
When I loaded a package in the SQL Server Data Tools, I got the following error:

I was only getting this error when I was trying to upgrade a package with a Script Component in it.
When I tried to edit the script I also got the following error message:

Strangely enough, when I opened the same package on my computer, it worked fine.

After searching a while, I found a solution to fix this issue.
The key is in the message "Cannot show Visual Studio 2010 Tools for Applications editor".

When I looked at the “Program and Features” of my server I saw that the following components were installed

When I compared it to my computer “Program and Features”, I saw that an extra component was installed Microsoft Visual Studio Tools for Applications x86 Runtime 3.0

After my investigation, I found out that the problem is caused by the order in which you install SQL Server Integration Services 2012 and SQL Server Data Tools.
So I played around and tested 3 different scenarios:

  1. First install Integration Services, then run the setup again to install SQL Server Data Tools

Actually, this was my original situation. I only installed Integration Services, but the customer also wanted the SQL Data Tools installed, so I added it afterwards.
When Integration Services was installed I saw that only one component was installed

Now, let's install the SQL Server Data Tools. According to the installation wizard, the Microsoft Visual Studio Tools for Applications 3.0 still needs to be installed (see picture below)

However, after the installation is finished, the Microsoft Visual Studio Tools for Applications Design-Time 3.0 has been added, but the Microsoft Visual Studio Tools for Applications x86 Runtime 3.0 is still missing…

  2. First install SQL Server Data Tools, then run the setup again to install Integration Services

I started again from a clean situation and installed the SQL Server Data Tools first. The Microsoft Visual Studio Tools for Applications Design-Time 3.0 and Microsoft Visual Studio Tools for Applications x86 Runtime 3.0 components have been installed.
As you might notice, the x64 Runtime is now missing…

No problem, let's install Integration Services now. I expected that the Microsoft Visual Studio Tools for Applications x64 Runtime 3.0 would be installed, since it was installed in my first test.
Well, surprisingly enough, it didn't install it… The SQL Server Installation Wizard saw that Microsoft Visual Studio Tools for Applications 3.0 was already installed (see picture below).

And when I verified "Programs and Features", the Microsoft Visual Studio Tools for Applications x64 Runtime 3.0 was still missing.

  3. Install SQL Server Data Tools and Integration Services at the same time

I started again from a clean situation and installed SQL Server Data Tools and Integration Services at the same time.
The SQL Server Installation Wizard detects that it still has to install the Microsoft Visual Studio Tools for Applications 3.0, which is a good thing.

Finally, both x64 and x86 Tools have been installed!

Now, let's see if my package upgrades without any errors.

YES, IT DID!!!

 

If you would like to have this issue fixed, please vote via the following link:

https://connect.microsoft.com/SQLServer/feedback/details/776248/could-not-load-file-or-assembly-microsoft-visualstudio-tools-applications-core

Remove database mirroring with SQLCMD

As an addition to my previous post, I just want to share this script to remove database mirroring from a database with SQLCMD. Again, make sure that you have activated SQLCMD Mode in your Management Studio (Menu – Query – SQLCMD Mode).

Remark: The database on the mirror server will be removed.

/****DEFINE THE VARIABLES THAT ARE NECESSARY TO REMOVE SQL DATABASE MIRRORING***/
:setvar MYDATABASE [AdventureWorks2012]
:setvar PRINCIPAL SQLSRV01
:setvar MIRROR SQLSRV02

:CONNECT $(PRINCIPAL)
GO
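-- Break the mirroring session on the principal server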
ALTER DATABASE $(MYDATABASE) SET PARTNER OFF
GO
:CONNECT $(MIRROR)
GO
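-- On the mirror server: bring the orphaned copy of the database online, then drop it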
RESTORE DATABASE $(MYDATABASE) WITH RECOVERY
GO
DROP DATABASE $(MYDATABASE)
GO

Setup Database Mirroring with SQLCMD

Hi all, I just want to share this script.
Today I had to set up SQL Server database mirroring for 12 databases. Instead of configuring this manually with SQL Server Management Studio, I created a script to implement database mirroring by using SQLCMD. The script must be executed in SQLCMD Mode in Management Studio. To activate this, select Query – SQLCMD Mode in the menu. On the mirror server, I'm using the default data and log paths to store the database files and the log files.
Just fill in the correct variables:
Variable Description
MYDATABASE Database name that you want to mirror
PRINCIPAL Principal SQL Server
MIRROR Mirror SQL Server
WITNESS Witness SQL Server. If you don't want to use automatic failover, leave the WITNESS variable empty
DOMAIN The domain of the servers
BACKUPPATH Specify where the full backup file should be stored
RESTOREPATH Specify the path on the mirror server where the backup files will be read from during the restore
COPYPATH Specify the path, relative to the mirror server's administrative share, to which the backup files should be copied
PrincipalListenerPort Principal listener port
MirrorListenerPort Mirror listener port
WitnessListenerPort Witness listener port
Timeout The mirroring partner timeout in seconds: how long a partner waits without a response before the other partner is considered unavailable (and, with a witness, automatic failover can occur)
/****DEFINE THE VARIABLES THAT ARE NECESSARY TO SETUP SQL DATABASE MIRRORING***/
:setvar MYDATABASE [AdventureWorks2012]
:setvar PRINCIPAL SQLSRV01
:setvar MIRROR SQLSRV02
:setvar WITNESS "SQLWITNESS"
:setvar DOMAIN pieter.local
:setvar BACKUPPATH I:\MSSQL\Backup\Mirroring
:setvar RESTOREPATH I:\MSSQL\Backup\Mirroring
:setvar COPYPATH i$\MSSQL\Backup\Mirroring
:setvar PrincipalListenerPort 5022
:setvar MirrorListenerPort 5022
:setvar WitnessListenerPort 5022
:setvar Timeout 10
SET NOCOUNT ON
GO
--Verify if witness is defined
if '$(WITNESS)' = ''
print '*** WARNING: NO witness server has been defined ***'

--1. Create endpoints on principal, mirror and witness.
:CONNECT $(PRINCIPAL)
GO
--*** Creating endpoint on the principal ***
if exists (select 1 from sys.endpoints where name = 'Mirroring')
print '*** WARNING: Endpoint ''Mirroring'' already exists on $(PRINCIPAL) ***'
else
CREATE ENDPOINT Mirroring STATE=STARTED AS tcp (listener_port=$(PrincipalListenerPort)) FOR database_mirroring (ROLE=all)
GO
:CONNECT $(MIRROR)
GO
--*** Creating endpoint on the mirror ***
if exists (select 1 from sys.endpoints where name = 'Mirroring')
print '*** WARNING: Endpoint ''Mirroring'' already exists on $(MIRROR) ***'
else
CREATE ENDPOINT Mirroring STATE=STARTED AS tcp (listener_port=$(MirrorListenerPort)) FOR database_mirroring (ROLE=all)
GO
:CONNECT $(WITNESS)
GO
--*** Creating endpoint on the witness ***
if exists (select 1 from sys.endpoints where name = 'Mirroring')
print '*** WARNING: Endpoint ''Mirroring'' already exists on $(WITNESS) or witness server was not defined ***'
else
CREATE ENDPOINT Mirroring STATE=STARTED AS tcp (listener_port=$(WitnessListenerPort)) FOR database_mirroring (ROLE=WITNESS)
GO
--2. Take Full Backup and COPY backup files to the mirror
:CONNECT $(PRINCIPAL)
GO
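-- Use backup COMPRESSION only on editions that support it:
-- SQL Server 2008 Enterprise or SQL Server 2008 R2 Standard/Enterprise
-- (SERVERPROPERTY('EngineEdition'): 2 = Standard, 3 = Enterprise)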
print '*** Take full backup of principal database $(MYDATABASE) ***'
IF (left(cast(SERVERPROPERTY('ProductVersion') as varchar),5)='10.00' and SERVERPROPERTY('EngineEdition')=3) OR (left(cast(SERVERPROPERTY('ProductVersion') as varchar),5)='10.50' and SERVERPROPERTY('EngineEdition') in (2,3))
BEGIN
BACKUP DATABASE $(MYDATABASE) TO DISK = '$(BACKUPPATH)\$(MYDATABASE).bak'
WITH COPY_ONLY, NOFORMAT, INIT, NAME = '$(MYDATABASE) Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10, COMPRESSION
print '*** Take transaction log backup of principal database $(MYDATABASE) ***'
BACKUP LOG $(MYDATABASE) TO DISK = '$(BACKUPPATH)\$(MYDATABASE).trn'
WITH NOFORMAT, INIT, NAME = '$(MYDATABASE) Transaction Log Backup', SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10
END
ELSE
BEGIN
BACKUP DATABASE $(MYDATABASE) TO DISK = '$(BACKUPPATH)\$(MYDATABASE).bak'
WITH COPY_ONLY, NOFORMAT, INIT, NAME = '$(MYDATABASE) Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
print '*** Take transaction log backup of principal database $(MYDATABASE) ***'
BACKUP LOG $(MYDATABASE) TO DISK = '$(BACKUPPATH)\$(MYDATABASE).trn'
WITH NOFORMAT, INIT, NAME = '$(MYDATABASE) Transaction Log Backup', SKIP, REWIND, NOUNLOAD, STATS = 10
END
GO
GO
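-- Copy the backup files to the mirror over its administrative share;
-- this assumes the account running the script can write to \\$(MIRROR)\$(COPYPATH)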
print '*** Copy principal database $(MYDATABASE) from principal server $(PRINCIPAL) to mirror server $(MIRROR) ***'
!!ROBOCOPY $(BACKUPPATH) \\$(MIRROR)\$(COPYPATH) $(MYDATABASE).*
GO

--3. Restore database on the mirror
:CONNECT $(MIRROR)
GO
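-- Read the instance default data and log paths from the registry
-- and create a subfolder per database for the restored files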
print '*** Create database directories ***'
DECLARE @root_folder nvarchar(128);
EXECUTE [master].[dbo].[xp_instance_regread]
@rootkey = N'HKEY_LOCAL_MACHINE',
@key = N'Software\Microsoft\MSSQLServer\MSSQLServer',
@value_name = N'DefaultData',
@value = @root_folder OUTPUT;

select @root_folder = @root_folder + '\$(MYDATABASE)'
EXEC master.dbo.xp_create_subdir @root_folder

EXECUTE [master].[dbo].[xp_instance_regread]
@rootkey = N'HKEY_LOCAL_MACHINE',
@key = N'Software\Microsoft\MSSQLServer\MSSQLServer',
@value_name = N'DefaultLog',
@value = @root_folder OUTPUT;

select @root_folder = @root_folder + '\$(MYDATABASE)'
EXEC master.dbo.xp_create_subdir @root_folder

GO
print '*** Restore full backup of database $(MYDATABASE) ***'
RESTORE DATABASE $(MYDATABASE)
FROM DISK = '$(RESTOREPATH)\$(MYDATABASE).bak'
WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10, REPLACE
GO
print '*** Restore transaction log backup of database $(MYDATABASE) ***'
RESTORE LOG $(MYDATABASE) FROM DISK = '$(RESTOREPATH)\$(MYDATABASE).trn'
WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 10
GO

--4. Activate Mirroring
:CONNECT $(MIRROR)
GO
print '*** Set partner on the Mirror DB ***'
ALTER DATABASE $(MYDATABASE) SET PARTNER = 'TCP://$(PRINCIPAL).$(DOMAIN):$(PrincipalListenerPort)'
GO
:CONNECT $(PRINCIPAL)
GO
print '*** Set partner on the Principal DB ***'
ALTER DATABASE $(MYDATABASE) SET PARTNER = 'TCP://$(MIRROR).$(DOMAIN):$(MirrorListenerPort)'
GO
print '*** Set PARTNER SAFETY FULL on the Principal ***'
ALTER DATABASE $(MYDATABASE) SET PARTNER SAFETY FULL
GO
print '*** Declare the witness on the Principal ***'
if '$(WITNESS)' <> ''
ALTER DATABASE $(MYDATABASE) SET WITNESS = 'TCP://$(WITNESS).$(DOMAIN):$(WitnessListenerPort)'
print '*** Setting the timeout on the principal to $(Timeout) seconds ***'
ALTER DATABASE $(MYDATABASE) SET PARTNER TIMEOUT $(Timeout)
GO
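As a quick verification (a small check of my own, not part of the script above), you can query sys.database_mirroring on the principal and confirm that the session is up and synchronized:

:CONNECT $(PRINCIPAL)
GO
-- Expect mirroring_role_desc = PRINCIPAL and mirroring_state_desc = SYNCHRONIZED
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
GO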

If you have any remarks or suggestions, just let me know.
Happy mirroring!