How I completely remove a SQL Server database – the right way!

I was shocked today!
I found a blog post on how you should remove a SQL Server database. I didn’t know whether I should start crying or laughing…
This blog post might seem ridiculous to some of you, but apparently it is useful after all.

There are two ways to delete a database.


With SQL Server Management Studio, you can right-click the database and select “Delete”.

In the delete object window select the option “Delete backup and restore history information for databases” if you want to remove this information.
If you want to kick out open connections to your database, select “Close existing connections”. If there are still open connections and you don’t select this option, it will be impossible to remove the database: you will receive an error that the database is still in use and cannot be deleted.

Once you hit the OK button, the database will be removed from the SQL instance and the database files on the OS level will also be removed. It is definitely not necessary to shut down the whole instance to remove a database…


The script below does exactly the same as the actions that were taken in the GUI and has the same result: the database will be removed!

--This will remove the backup history
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'DBToDelete'
USE [master]
GO
--This will drop all the open connections
ALTER DATABASE [DBToDelete] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
--This will remove my database
DROP DATABASE [DBToDelete]
GO

Now, after the removal you still have some extra cleanup to do that people often forget…

Delete the jobs

Delete the jobs that were related to the database. If you don’t remove them, the jobs will fail and you will receive unnecessary alerts.
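A quick way to find those jobs is to query the msdb job tables. This is only a sketch, using the example database name DBToDelete from above; review the result before deleting anything:

```sql
-- Find jobs that run in, or whose step commands mention, the dropped database
SELECT j.name AS JobName, s.step_id, s.step_name
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
WHERE s.database_name = N'DBToDelete'
   OR s.command LIKE N'%DBToDelete%';
```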

Delete the backup files

If you don’t need the backup files anymore, just remove them. However, I would recommend keeping the last full backup of the database and archiving it for at least a year or two. You never know whether somebody will need the data in the future… :)

Delete the logins without DB user

Your database probably had some database users configured that were linked to a server login.
If that server login isn’t used by any other database user and isn’t a member of any server role besides public, I would recommend removing that login. Not only for security reasons, but also to keep your server clean.
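To find candidates, you could cross-check the server logins against the database users of every database. The helper below is only a sketch (sp_MSforeachdb is undocumented, and members of Windows groups are not detected), so review the result carefully before dropping any login:

```sql
-- Collect the server logins that still have a mapped database user somewhere
CREATE TABLE #MappedLogins (LoginName sysname);

EXEC sp_MSforeachdb N'USE [?];
INSERT INTO #MappedLogins (LoginName)
SELECT sp.name
FROM sys.database_principals dp
INNER JOIN sys.server_principals sp ON dp.sid = sp.sid;';

-- Logins without any database user and without server role membership besides public
SELECT sp.name
FROM sys.server_principals sp
WHERE sp.type IN ('S','U','G')      -- SQL, Windows and Windows group logins
  AND sp.name NOT IN (SELECT LoginName FROM #MappedLogins)
  AND NOT EXISTS (SELECT 1
                  FROM sys.server_role_members rm
                  WHERE rm.member_principal_id = sp.principal_id);

DROP TABLE #MappedLogins;
```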

Well, I hope that this can help some junior DBA’s and that they find my blog post instead of the other one :)


Remove Azure Subscriptions from SSMS

Last week I was preparing my pre-con for SQL Server Days.
I was playing around a lot with the new Azure features because that will be a big part of my pre-con.
When I was trying to deploy a database to my Azure VM with the new “Deploy database to Azure VM” wizard,
I noticed that there were a lot of duplicate subscriptions that I could use. This was probably caused by the testing that I have done lately ;)
Adding a subscription is easy, but I couldn’t find any way to remove a subscription from the drop-down box.

Double Subscriptions

As an MVP, you get the privilege to ask questions to other SQL MVP’s and the Microsoft Product Team.
So, I posted this question and quickly got a response from a Microsoft SQL Server Program Manager.
The solution is public information, so I’m happy to share it with you.

The references to the subscriptions are stored under the following directory

%USERPROFILE%\Documents\SQL Server Management Studio\Deploy DB to Windows Azure VM


I removed them and went back to my deploy wizard, but there was still one subscription available.
Default file

Apparently, that is my active subscription.
To get rid of this active subscription, you need to update the deploymentsettings.xml file in the folder

%USERPROFILE%\AppData\Local\SQL Server\Deploy to SQL in WA VM

If you open the XML file, you will indeed see the subscription.

After removing the deploymentsettings.xml file, I got my “clean” drop-down box back.

This is something that you won’t need a lot, but it’s always good to know!


Speaking at SQL Server Days 2014

On the 30th of September and the 1st of October, the Belgian SQL Server User Group is organizing SQL Server Days for the 7th time.
Over the last few years, SQL Server Days has become more and more popular in the SQL Server community, and that’s obvious if you look at the impressive conference schedule!

This year SQL Server Days 2014 will again be a two-day conference.
On the first day, there are deep dive trainings given by international and national speakers.
The second day will be the main technical conference for database administrators, developers and business intelligence professionals!

Four years ago I went to SQL Server Days as an attendee; in the three years since, I was selected to give a regular session.
But this year, I’m really honored to present my first pre-con, “SQL Server 2014: What’s in it for you?”
I recently wrote a blog post on why it’s so hard to convince people to upgrade to SQL Server 2014.
Well, in this pre-conference, I will dig into the new features of SQL Server 2014.
I will put a strong focus on the new Windows Azure and hybrid cloud functionalities and of course other features
like In-memory OLTP engine, Buffer Pool Extension, Enhanced Windows Server 2012 Integration, and many more…
The session will be bulk loaded with demos and hopefully I can convince people to upgrade ;)

Again, I’m really honored to be part of the event and really looking forward to meeting some fine SQL community speakers like
Grant Fritchey, Bob Duffy, Denny Cherry, Joey d’Antoni and many more!
SQL Server Days 2014 is hosted at San Marco Village, which is an excellent venue close to Antwerp.
You really don’t want to miss this event and the opportunity for networking with the Microsoft Data Platform professionals!

You can still register here.

Well, I hope to see you all at SQL Server Days! Really looking forward to it!

Last known good DBCC CHECKDB not always correct

On the internet you can find a lot of methods to find out when DBCC CHECKDB last ran on your databases. The two most common queries are based on DBCC DBINFO() and DBCC PAGE.
Below you can find two examples (replace “YourDatabase” with the correct database name):

USE YourDatabase
GO
CREATE TABLE #tmp
	(ParentObject varchar(1000) NULL,Object varchar(1000) NULL,Field varchar(1000) NULL,Value varchar(1000) NULL)
insert into #tmp
EXEC ('DBCC DBINFO() WITH TABLERESULTS')
select cast(value as datetime) from #tmp where field='dbi_dbcclastknowngood'
drop table #tmp


DBCC TRACEON (3604);
DBCC PAGE (YourDatabase, 1, 9, 3);

In my morning DBA checklist, I’m using the first query to check whether there has been a database integrity check in the last 7 days. This query works well; however, last week I noticed an issue. As soon as a database is in read-only or emergency mode, the DBCC CHECKDB command will succeed but the dbi_dbcclastknowngood field will (or can) not be updated. For that reason, my check indicates a failed status for my read-only databases…

Let’s take an example. First of all, I’m going to perform a DBCC CHECKDB on my AdventureWorks database:

DBCC CHECKDB(AdventureWorks2012)

Once the check is finished, let’s verify the last dbcc check date with the DBCC DBINFO() query.


Now, let’s set the database to read-only and do the test again.

DBCC read only

As you might notice, the last DBCC CHECKDB time is the same in both cases.

I found a good tip in an old blog post of Jonathan Kehayias on how to find the correct information: just scan the SQL error log for the last 7 days. If I check the logs from my previous tests, you can see that there was a DBCC CHECKDB at 14:57:22, which was on my read-only database.

Error Log

I’ve modified Jonathan’s script: I’m using xp_readerrorlog and I’m no longer checking for the SQL 2000 version… I’ve also replaced the DBCC check in my DBA checklist with the query below.

CREATE TABLE #ErrorLog
	( Logdate datetime ,
	  ProcessInfo VARCHAR(12) ,
	  MsgText VARCHAR(3000) )
CREATE TABLE #NumberOfErrorLogFiles
	(	ArchiveID tinyint,
		FileDate datetime,
		FileSize int )
insert into #NumberOfErrorLogFiles
EXEC master..xp_enumerrorlogs

DECLARE @SinceDate datetime  
SET @SinceDate = DATEADD(dd, -7, GETDATE())
DECLARE @ErrorFile tinyint
DECLARE @LastLogFile tinyint
select @LastLogFile=min(ArchiveID) from #NumberOfErrorLogFiles
where FileDate<@SinceDate
DECLARE @DBCCStatement nvarchar(1024)
set @DBCCStatement=N'DBCC CHECKDB (' + DB_NAME() + ')'

set @ErrorFile=0
while @ErrorFile<=ISNULL(@LastLogFile,0)
begin
	insert into #ErrorLog
	EXEC master.dbo.xp_readerrorlog @ErrorFile, 1, @DBCCStatement, NULL, @SinceDate, NULL, N'ASC'
	set @ErrorFile=@ErrorFile+1
end

select * from #ErrorLog where Logdate>@SinceDate

drop table #ErrorLog
drop table #NumberOfErrorLogFiles

Have fun!


Speaking at SQL Saturday Oslo #317

To end the holiday period in Belgium in style, I’m going on a trip to Oslo.
Not on vacation, but to speak at the first edition of SQL Saturday Oslo!
Well, in some cases, this might be considered as a small vacation ;)

I’m very honored and excited that my session “How to manage many SQL Servers the easy way” has been selected.
It was a really impressive list of submitted sessions, but my session made it!
It’s the second time that I’ll be presenting this session at a SQL Saturday.
The first time was at SQL Saturday Exeter a few months ago, where I got really good feedback!
The purpose of this session is to point out which SQL Server tools you can use for multi-server management.
It is bulk loaded with demos and it will give you a good idea of which features can be helpful in your environment.

I’m really looking forward to meeting Cathrine Willemsen (twitter | blog) and Tone Hansson (twitter | blog), the organizers of the event, who are doing an excellent job!
A lot of other great SQL names are on the schedule. I’m looking forward to meeting some new speakers like Tim Ford, Bob Duffy, Niko Neugebauer, Régis Baccaro, Kalen Delaney and many others.
And of course the other international speakers that I have met before, like Boris Hristov, Matija Lah, André Kamman, Marco Russo, …

Hope to see you all there!

Tracking login and user changes in SQL Server

Did you ever want to track SQL Server login and database user changes on all your SQL Servers? At one of my customers, the IT guys are sysadmins on the SQL Servers. In fact, the IT department is managing the SQL Servers when I’m not around. One day, “somebody” removed a user and of course, nobody did it ;)

The first thing that came to mind was implementing SQL Server Audit. A good solution, but not all versions of SQL Server support SQL Server Audit. SQL Server 2008 and 2008 R2 support it only in Enterprise Edition, and there is no audit at all in SQL Server 2005. So, I had to come up with a solution that works on these versions as well. My solution is based on this blog post of Aaron Bertrand (twitter | blog), but I’ve extended it.

Server Audit

I’ve implemented SQL Server Audit on all SQL Server 2008 and R2 Enterprise servers and on all SQL 2012 servers. Unfortunately no SQL 2014 yet, but you should also implement SQL Audit on this version. The audit dumps all the audit information into an audit log file on the server.

USE [master]
GO
CREATE SERVER AUDIT [Audit Login Changes]
TO FILE
(	FILEPATH = N'<YourAuditFilePath>'
	,MAXSIZE = 1024 MB
	,MAX_FILES = 10
	,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 1000
	,ON_FAILURE = CONTINUE
	,AUDIT_GUID = '33e055a9-1924-47c3-9798-e8875a5bcd42'
)
GO
ALTER SERVER AUDIT [Audit Login Changes] WITH (STATE = ON)
GO


I defined the following audit action types because I wanted to know all changes made to logins, roles and users.

  • DATABASE_ROLE_MEMBER_CHANGE_GROUP: Raised whenever a login is added to or removed from a database role
  • SERVER_ROLE_MEMBER_CHANGE_GROUP: Raised whenever a login is added to or removed from a fixed server role
  • DATABASE_PERMISSION_CHANGE_GROUP: Raised for any database permission change
  • SERVER_OBJECT_PERMISSION_CHANGE_GROUP: Raised whenever a GRANT, REVOKE, or DENY is issued for a server object permission
  • SERVER_PERMISSION_CHANGE_GROUP: Raised when a GRANT, REVOKE, or DENY is issued for permissions in the server scope, such as creating a login
  • DATABASE_PRINCIPAL_CHANGE_GROUP: Raised when users are created, altered, or dropped from a database
  • SERVER_PRINCIPAL_CHANGE_GROUP: Raised when server principals are created, altered, or dropped

More information on the Audit Action Types can be found here.

USE [master]
GO
CREATE SERVER AUDIT SPECIFICATION [Audit Login Changes Specification]
FOR SERVER AUDIT [Audit Login Changes]
	ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP),
	ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
	ADD (DATABASE_PERMISSION_CHANGE_GROUP),
	ADD (SERVER_OBJECT_PERMISSION_CHANGE_GROUP),
	ADD (SERVER_PERMISSION_CHANGE_GROUP),
	ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
	ADD (SERVER_PRINCIPAL_CHANGE_GROUP)
WITH (STATE = ON)
GO

Event Notifications

As mentioned before, we cannot use SQL Audit on SQL 2008 and R2 Standard Edition or on SQL 2005. As an alternative, you can set up event notifications, which will capture messages via Service Broker. The scripts below are based on the scripts of Aaron, but I’ve added more events because I wanted to trace more than just “change password”.

Create the following table in the msdb database

USE [msdb];

CREATE TABLE dbo.SecurityChangeLog
(   ChangeLogID          int IDENTITY(1,1),
    LoginName            SYSNAME,
    UserName             SYSNAME,
    DatabaseName         SYSNAME,
    SchemaName           SYSNAME,
    ObjectName           SYSNAME,
    ObjectType           VARCHAR(50),
    DDLCommand           VARCHAR(MAX),
    EventTime            DATETIME NOT NULL DEFAULT GETDATE() --used later by the central collection script; DATETIME keeps SQL 2005 compatibility
);

Set up the queue:

CREATE QUEUE SecurityChangeQueue;

CREATE SERVICE SecurityChangeService ON QUEUE SecurityChangeQueue
	([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);


Set up the event notification. If you check the “FOR” clause, you will notice that the two security event groups cover the same login, user, role and permission actions as defined in the SQL Audit specification.

CREATE EVENT NOTIFICATION SecurityChangeNotification
    ON SERVER
    FOR DDL_SERVER_SECURITY_EVENTS, DDL_DATABASE_SECURITY_EVENTS
    TO SERVICE 'SecurityChangeService', 'current database';


Install the following stored procedure to log all the event notifications into the table we’ve just created. You might notice that in the loop I’m checking the version of SQL Server for some events. This is because the event notification content differs between SQL 2008 (R2) and SQL 2005.

USE [msdb];
CREATE PROCEDURE [dbo].[LogSecurityChange]
AS
BEGIN
	SET NOCOUNT ON;
	DECLARE @version int
	DECLARE @message_body XML;
	set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))

	WHILE (1 = 1)
	BEGIN
		WAITFOR
		(  RECEIVE TOP(1) @message_body = message_body
		   FROM dbo.SecurityChangeQueue
		), TIMEOUT 1000;

		IF (@@ROWCOUNT = 1)
		BEGIN
		--SQL 2008+ (and user events on SQL 2005) deliver the executed statement directly
		if CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/EventType)')) in ('DROP_USER','CREATE_USER','ALTER_USER') or @version>9
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DefaultSchema)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'))
		else
			--SQL 2005 role membership events: reconstruct the command from the event fields
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/SchemaName)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/EventType)')) + ' ' + 
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/RoleName)')) + ' FOR ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/LoginType)')) + ' ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/ObjectName)'))
		END
		ELSE
			BREAK;
	END
END


The last step is modifying the queue so that it uses the stored procedure and starts tracking the login and user changes.

ALTER QUEUE SecurityChangeQueue
WITH ACTIVATION
(  STATUS = ON,
   PROCEDURE_NAME = dbo.LogSecurityChange,
   MAX_QUEUE_READERS = 1,
   EXECUTE AS OWNER
);

Bring it all together

All the information is now logged on each individual server. I want all the audit information in one central database so that the customer can run reports on it. First of all, I created a database on one of the SQL Servers with a table that will store the audit logs of all the servers.

CREATE TABLE [dbo].[SecurityChangeLog](
	[ChangeLogID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [sysname] NOT NULL,
	[LoginName] [sysname] NOT NULL,
	[DatabaseName] [sysname] NOT NULL,
	[ObjectName] [nvarchar](128) NULL,
	[ObjectType] [varchar](50) NULL,
	[DDLCommand] [nvarchar](max) NULL,
	[EventTime] [datetime2](7) NULL,
	CONSTRAINT [PK_SecurityChangeLog] PRIMARY KEY CLUSTERED
	(	[ChangeLogID] ASC )
);


The next step is retrieving all the log information from each server. I was thinking of using an SSIS package for this, but it looked easier to use PowerShell instead. The script below is based on the central management server that I’m using. First I retrieve all the SQL Servers from which I have to collect the audit data. This list is stored in the table msdb.dbo.sysmanagement_shared_registered_servers_internal on the central management server. Make sure that you change YOURGROUPID to the appropriate group id of your CMS and YOURCMS to the instance name of your CMS.

Once we have the list, I execute the script GetAuditLog.sql against each instance to retrieve the audit information.

The final step will write the retrieved audit information into the database. Make sure you change the YOURDATABASENAME and YOURSQLSERVERINSTANCE in the script to your database name and SQL instance.

Schedule a SQL Agent job to execute the PowerShell script on a daily basis.


#Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Import-Module "sqlps" -DisableNameChecking
$InputFile = "YOURFILEPATH\GetAuditLog.sql"

#Get the list of servers from the Central Management Server
$InstanceNameList = Invoke-Sqlcmd -Query "select server_name from msdb.dbo.sysmanagement_shared_registered_servers_internal 
                                          where server_group_id=YOURGROUPID" -ServerInstance "YOURCMS"
Clear-Variable -Name results -ErrorAction SilentlyContinue
Clear-Variable -Name record -ErrorAction SilentlyContinue

#Get Audit information
foreach($instanceName in $InstanceNameList) 
{
    write-host "Executing query against server: " $instanceName.server_name
    $results += Invoke-Sqlcmd -InputFile $InputFile -ServerInstance $instanceName.server_name
}
#Save the Audit Log into the database
foreach($record in $results)
{
    #write-host "ServerName: " $record.ServerName " " $record.DDLCommand
    Invoke-Sqlcmd -Query "insert into [YOURDATABASENAME].[dbo].[SecurityChangeLog] ([ServerName],[LoginName],[DatabaseName],[ObjectName],[ObjectType],[DDLCommand],[EventTime])  
                          Values('$($record.ServerName)','$($record.LoginName)','$($record.DatabaseName)','$($record.ObjectName)','$($record.ObjectType)','$($record.DDLCommand)','$($record.EventTime)')" -ServerInstance "YOURSQLSERVERINSTANCE"
}


Below you can find the SQL script that is used by the PowerShell script to retrieve the audit information. Depending on the version, we read either the audit file or the event notification table. I’m only selecting the logs of the last day because the PowerShell script is scheduled to run on a daily basis.

declare @AuditLog  table (
LoginName sysname,
DatabaseName sysname,
ObjectName nvarchar(128),
ObjectType varchar(50),
DDLCommand nvarchar(4000),
EventTime datetime)
declare @LogFilePath varchar(256)
declare @version int
set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))
IF (@version=10 and (select convert(varchar,SERVERPROPERTY('Edition'))) not like '%Standard%') OR (@version=11)
begin
	select @LogFilePath=log_file_path from sys.server_file_audits
	where name='Audit Login Changes'
	IF @LogFilePath IS NOT NULL
	begin
		set @LogFilePath=@LogFilePath +'*.sqlaudit'
		insert into @AuditLog
		SELECT session_server_principal_name,database_name,object_name,m.class_type_desc,statement,event_time FROM sys.fn_get_audit_file (@LogFilePath,default,default) f
		INNER JOIN sys.dm_audit_class_type_map m on m.class_type=f.class_type;
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from @AuditLog
		where EventTime>dateadd(dd,-1,getdate())
	end
end
else
	IF exists (select name from msdb.sys.tables where name='SecurityChangeLog')
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from msdb.dbo.SecurityChangeLog
		where EventTime>dateadd(dd,-1,getdate()) and DDLCommand<>''


To make it easy for the customer to retrieve the data, I’ve created this report.

Security Changes

I know that this solution is not really secure, because people can just stop the audit, disable the queue or delete the records from the tables, but I think this is a good start to track the login and user modifications.

Have fun!

It’s hard to convince clients to use SQL 2014

SQL Server 2014 was released on the 1st of April, and it has some really cool features that might help you and your organization and can definitely justify an upgrade. In my job as a SQL consultant, I have a lot of different customers, and none of them is currently working on SQL Server 2014 or planning to upgrade to this new version. I find it really hard to convince my clients to upgrade, or even to pick it for a brand new installation.

Why is it so hard to convince people to start using SQL Server 2014? After questioning my customers, I figured out these 5 reasons.

I don’t trust it

This is probably the most common argument of my customers. They don’t trust the new version because it has just been released and probably has a lot of bugs. “Let’s wait till the first service pack has been released, then we know it is stable…” Most of my clients tell me that there are just too many risks in installing an RTM version of SQL Server. “What happens if something goes wrong and we’re not prepared for it?” I don’t think Microsoft will release a version of SQL Server that is not stable or not fully tested. It’s true that in the beginning of a new release more bugs are found, but I wouldn’t state that the new release is unstable. Microsoft is still releasing service packs and cumulative updates for older versions of SQL Server. Does this mean that this software is unstable? Well… no… because you’re probably using it right now. Bug fixes are always part of software, whether it’s a new or an old version.

Oh, is there a new release?

I’ve noticed that many people don’t know that a new version has been released. They just install the older version that they know. Or they have heard about a new release but are not aware of all the cool new functionality that could benefit their company. And that is where we, the SQL Family, come in: we try to make people aware of all these new features with SQL events, webinars, workshops, blogs, …

My vendor doesn’t support it yet…

Another reason that I hear a lot is that the vendor of a software package (which uses a SQL database) does not support the new version yet. I even had to install a brand new SQL Server 2005 last month because the vendor didn’t support any other version… I told my customer to get rid of that software and find a good alternative. The vendor’s software will probably work on SQL Server 2014, but if you use it anyway, you lose your full support on the software… Try the upgrade advisor and see if there is some exotic configuration that will no longer work. I doubt it… I usually send the report to my customer and the supplier. And I’ve noticed lately that suppliers respond positively, saying they will support SQL Server 2014 in their next major release.

It’s too expensive

Upgrading to a new version of SQL Server is a project that requires some investment: not only the license cost but also new hardware, re-coding your application, testing, … Many customers complain that Enterprise Edition has become really expensive, but do they really need Enterprise Edition? From SQL Server 2008 onwards, you can use the DMV sys.dm_db_persisted_sku_features to view all Enterprise Edition-specific features that are enabled in the current database. If they do need Enterprise Edition, it might be worth starting a consolidation project instead of an upgrade project to save license costs.
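For example, if the following query returns an empty result set, the database uses no Enterprise-only features and Standard Edition would do:

```sql
-- Enterprise Edition features persisted in the current database (SQL 2008+)
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
```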

The upgrade can also save you money, because new features like page and row compression or resource governor can save resources, and the new T-SQL improvements can save your developers a lot of time.

No time, too much work

I hear this a lot… “Upgrade…? Sorry, but we don’t have time for that… Why should we upgrade? The database is working just fine and we have more important things on our agenda.” I don’t think it really has anything to do with “no time”. Some people are just not motivated to do an upgrade. It’s hard to convince people who are not willing to learn and are not open to new ideas. And maybe that’s because of the points mentioned above: they don’t trust the software and it will cost the company a lot of money.

I don’t know if I’m the only one who is experiencing this :) But I hope that I can change the perception of my clients… It’s a tough job though… believe me…

How to change the database owner for the secondary replica database

When you restore a database to a SQL Server instance where the database doesn’t exist, the owner of the database is set to the user that is executing the restore. Actually, this is also the case when attaching a database. The ownership can give the login extra permissions, which in some cases you don’t want. A good rule of thumb is to choose “sa” as the database owner. This can easily be done by executing a change-of-ownership statement after the restore is done.
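The ownership change itself is a one-liner (a sketch, assuming a database named YourDatabaseName):

```sql
USE [master]
GO
-- Change the database owner to sa after the restore
ALTER AUTHORIZATION ON DATABASE::[YourDatabaseName] TO [sa];
```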


Just keep in mind, when you change the owner of the database, that the user that owned the database might have fewer or no permissions anymore on the database.

Unfortunately, if you restore a database as read-only, or if you leave it in restoring mode for an AlwaysOn Availability Group configuration, you cannot change the database owner anymore.
Even if you set up an AlwaysOn Availability Group with the wizard of SQL Server Management Studio, the owner of the database on the secondary replica will be the user that executed the wizard. Quite obvious, because in the background the wizard is performing the restore :)
Your database must be in read/write mode before you can change the database owner.

If the database is read-only, it can briefly be set to read/write for the modification and then set back. That shouldn’t be an issue.
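For the read-only case, the toggle could look like this (a sketch, again assuming a database named YourDatabaseName):

```sql
ALTER DATABASE [YourDatabaseName] SET READ_WRITE WITH ROLLBACK IMMEDIATE;
ALTER AUTHORIZATION ON DATABASE::[YourDatabaseName] TO [sa];
ALTER DATABASE [YourDatabaseName] SET READ_ONLY;
```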
In the case of AlwaysOn Availability Groups, you will have to perform a failover to the secondary replica, change the database owner, and fail back to the original primary replica.
But I can imagine situations where a failover will not be possible, for example with asynchronous-commit mode, or simply because the business doesn’t want downtime.

A possible workaround could be to manually prepare the secondary database for the availability group, but perform the restore as the user sa. The following script can be used:

EXECUTE AS login='sa'
RESTORE DATABASE [YourDatabaseName] FROM DISK = '<YourBackupPath>' WITH NORECOVERY
REVERT

More information on how to manually prepare a secondary database for an availability group can be found here.

It’s a simple script, but it does change the owner of the database to sa, and you don’t have to worry about a failover to set the correct database owner.


Test your scripts against different database states

Probably you have all created scripts that worked just fine on your instance but didn’t work in production. I’ve seen a lot of scripts that run just fine as long as all the databases have the status ONLINE. But as soon as a database has another state, the scripts start to fail… The possible database states are:

    • ONLINE
    • OFFLINE
    • RESTORING
    • RECOVERING
    • RECOVERY PENDING
    • SUSPECT
    • EMERGENCY

More information on the database states can be found in the SQL Server documentation. You should handle the different database states in your scripts and test them against all the different states. Now… the challenge is to force your database into a particular state. In this blog post, I’m going to explain how you can create test databases with the different states.


That’s the easy part: when you create a new database, it will automatically be ONLINE.
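A sketch, using a hypothetical database called TestOnline:

```sql
CREATE DATABASE TestOnline;
GO
SELECT name, state_desc FROM sys.databases WHERE name = 'TestOnline';
-- state_desc shows ONLINE
```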



When you want to create an OFFLINE database, you first have to create the database and then run the ALTER DATABASE statement to bring it OFFLINE.
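A sketch, with a hypothetical database called TestOffline:

```sql
CREATE DATABASE TestOffline;
GO
ALTER DATABASE TestOffline SET OFFLINE WITH ROLLBACK IMMEDIATE;
```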


You can also bring a database offline with the GUI of the SQL Server Management Studio.


To create a database in the RESTORING state, create a new database, take a FULL backup of it, and RESTORE the database with the option NORECOVERY. The database will not come ONLINE but will stay in RESTORING mode.
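A sketch, with a hypothetical database TestRestoring and a placeholder backup path:

```sql
CREATE DATABASE TestRestoring;
GO
BACKUP DATABASE TestRestoring TO DISK = '<YourBackupPath>';
GO
-- Restoring WITH NORECOVERY leaves the database in the RESTORING state
RESTORE DATABASE TestRestoring FROM DISK = '<YourBackupPath>' WITH REPLACE, NORECOVERY;
```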



This is a temporary state. Normally, a database is only in “In Recovery” mode during startup: when SQL Server starts up the database, or when you bring it back ONLINE from an OFFLINE state. I haven’t found a way to simulate a database that stays in RECOVERING mode…


Follow these steps to create a database with the RECOVERY PENDING state:

  • Create an empty database
  • Stop the SQL Server Service
  • Rename or remove the database transaction log file of the database
  • Restart the SQL Server Service

You will notice that the database will now have a Recovery Pending state


I followed this blog post of Paul Randal (blog | twitter) to create a SUSPECT database. However, my database always ended up in the Recovery Pending state and not SUSPECT… In the blog post, Paul performs an update in an uncommitted transaction. I changed it into an insert of an extra colleague (myself ;)) and stopped the instance. I opened the log file with the same XVI32 tool, searched for my name, and changed that section. When I restarted the SQL Server instance, the engine was unable to roll back my transaction and the database finally went into the SUSPECT state. Paul describes in this article the differences between SUSPECT and RECOVERY PENDING.


When you want to create a database that goes into EMERGENCY mode, you first have to create the database and then run the ALTER DATABASE statement.
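A sketch, with a hypothetical database TestEmergency:

```sql
CREATE DATABASE TestEmergency;
GO
ALTER DATABASE TestEmergency SET EMERGENCY;
```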



Apart from the above mentioned states, I would also recommend to create the following test databases.

  • A database with name DB_EX@T!ç-$&ù!&.,.;=+#

Well, start testing your scripts and see how many of them can be improved.

Moving transaction log files

One of my customers took my advice and finally wanted to move their transaction log files to a separate disk. Probably the current disk space issues on that server helped a little bit here as well ;)
The best way to move database and log files to a new location is by specifying the new file location in the FILENAME clause of the ALTER DATABASE statement.
To move a file of a database, follow these steps:

  1. Set the database offline
  2. Move the file or files to the new location
  3. Alter the database for each file that has been moved
  4. Bring back the database online
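The four steps above could look like this for a single database (a sketch, assuming a database MyDB whose logical log file name is MyDB_log and a hypothetical target folder E:\Logs):

```sql
-- 1. Set the database offline
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- 2. Move the .ldf file to the new location at the OS level
-- 3. Point the database to the new file location
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILENAME = 'E:\Logs\MyDB_log.ldf');
-- 4. Bring the database back online
ALTER DATABASE MyDB SET ONLINE;
```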

This method works really well for a single database, but as soon as you have to repeat it a few times, automation pays off.
And in my case, the customer wanted to move the transaction log files of all the databases (230) in one go.

The first part of the script collects the current physical paths and file names of the transaction log files and stores them in a table variable.
Don’t forget to specify the parameter @NewPath, which should contain the new path for your transaction log files.
Secondly, a loop goes over all the databases that are ONLINE and executes the 4 actions described above. With the “ONLINE” filter, you avoid bringing OFFLINE databases back online.
If the move was successful, the script deletes the old transaction log file. I’m using xp_cmdshell to perform the COPY and the DEL (I know…). If you know a better solution, just drop me a comment!
Also keep in mind that this script only works within the same SQL Server instance.

declare @DatabaseFileLocation as table  (
[Databaseid] int,
[DBName] sysname,
[FileType] int,
[name] varchar(256),
[physical_name] varchar(1024)
)

DECLARE @sqlcmd nvarchar(1024), @Newpath nvarchar(1024),@ID int

--Specify the new transaction log file folder
set @Newpath='E:'
set @ID=4 --start after the system databases (database_id 1-4)

--Collect the current physical name of the transaction log file
SET @sqlcmd = N'use [?]
select DB_ID(), DB_NAME(), type, name, physical_name from sys.database_files where type=1'

Insert into @DatabaseFileLocation
exec sp_MSForEachDB @sqlcmd

while 0=0 
begin
	select @ID=min(DATABASE_ID) from sys.databases where database_id>@ID and state_desc='ONLINE'
	if @@rowcount=0 or @ID is null break
	--Set database OFFLINE
	select @sqlcmd= N'ALTER DATABASE [' + name + '] SET OFFLINE WITH ROLLBACK IMMEDIATE' from sys.databases
	where database_id=@ID
	print @sqlcmd
	exec sp_executesql @sqlcmd
	--COPY the transaction log file to the new path
	SELECT @sqlcmd= 'COPY ' + physical_name + ' ' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) from @DatabaseFileLocation where Databaseid=@ID
	print @sqlcmd
	exec master..xp_cmdshell @sqlcmd
	--ALTER the database so that it points to the new transaction log file location
	select @sqlcmd='ALTER DATABASE [' + DBName + '] MODIFY FILE ( NAME = ['+ name + '], FILENAME = ''' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) + ''')' from @DatabaseFileLocation where Databaseid=@ID
	print @sqlcmd
	exec sp_executesql @sqlcmd
	--Set database ONLINE
	select @sqlcmd= N'ALTER DATABASE [' + name + '] SET ONLINE' from sys.databases
	where database_id=@ID
	print @sqlcmd
	exec sp_executesql @sqlcmd
	--If all went well, remove the old transaction log files
	SELECT @sqlcmd= 'DEL ' + physical_name  from @DatabaseFileLocation where Databaseid=@ID
	print @sqlcmd
	exec master..xp_cmdshell @sqlcmd
end