Remove Azure Subscriptions from SSMS

Last week I was preparing my pre-con for SQL Server Days.
I was playing around a lot with the new Azure features because they will be a big part of my pre-con.
When I was trying to deploy a database to my Azure VM with the new “Deploy database to Azure VM” wizard,
I noticed a lot of duplicate subscriptions in the dropdown box, probably caused by all the testing I have done lately ;)
Adding a subscription is easy, but I couldn’t find any way to remove one from the dropdown box.

[Screenshot: duplicate subscriptions in the wizard]

As an MVP, you get the privilege of asking questions to other SQL MVPs and the Microsoft Product Team.
So I posted this question and quickly got a response from a Microsoft SQL Server Program Manager.
The solution is public information, so I’m happy to share it with you.

The references to the subscriptions are stored in the following directory:

%USERPROFILE%\Documents\SQL Server Management Studio\Deploy DB to Windows Azure VM

[Screenshot: subscription files]

I removed them and went back to the deploy wizard, but I still saw one subscription available.
[Screenshot: remaining default subscription]

Apparently, that is my active subscription.
To get rid of this active subscription, you need to update the deploymentsettings.xml file in the folder

%USERPROFILE%\AppData\Local\SQL Server\Deploy to SQL in WA VM

When you open the XML file, you will indeed see the subscription.

After removing the deploymentsettings.xml file, I got my “clean” dropdown box back.

This is something that you won’t need often, but it’s always good to know!

Pieter

Speaking at SQL Server Days 2014

On the 30th of September and the 1st of October, the Belgian SQL Server User Group is organizing SQL Server Days for the 7th time.
Over the last few years, SQL Server Days has become more and more renowned in the SQL Server community, and that really shows in the impressive conference schedule!

This year SQL Server Days 2014 will again be a two-day conference.
On the first day, there are deep dive trainings given by international and national speakers.
The second day will be the main technical conference for database administrators, developers and business intelligence professionals!

Four years ago I went to SQL Server Days as an attendee; the three years after that I was selected to give a regular session.
But this year, I’m really honored to present my first pre-con: “SQL Server 2014: What’s in it for you?”
I recently wrote a blog post about why it’s so hard to convince people to upgrade to SQL Server 2014.
Well, in this pre-conference, I will dig into the new features of SQL Server 2014.
I will put a strong focus on the new Windows Azure and hybrid cloud functionality and of course other features
like the In-Memory OLTP engine, Buffer Pool Extension, enhanced Windows Server 2012 integration, and many more…
The session will be bulk loaded with demos and hopefully I can convince people to upgrade ;)

Again, I’m really honored to be part of the event and really looking forward to meeting some other fine SQL community speakers like
Grant Fritchey, Bob Duffy, Denny Cherry, Joey d’Antoni and many more!
SQL Server Days 2014 is hosted at San Marco Village, which is an excellent venue close to Antwerp.
You really don’t want to miss this event and the opportunity to network with Microsoft Data Platform professionals!

You can still register here.

Well, I hope to see you all at SQL Server Days! Really looking forward to it!

Last known good DBCC CHECKDB not always correct

On the internet you can find a lot of methods to find out when DBCC CHECKDB last ran on your databases. The two most common queries are based on DBCC DBINFO() and DBCC PAGE.
Below you can find two examples (replace “YourDatabase” with the correct database name):

USE YourDatabase
GO
-- Dump DBCC DBINFO() into a temp table and read the dbi_dbcclastknowngood field
CREATE TABLE #tmp
	(ParentObject varchar(1000) NULL, Object varchar(1000) NULL, Field varchar(1000) NULL, Value varchar(1000) NULL)
INSERT INTO #tmp
EXEC ('DBCC DBINFO() WITH TABLERESULTS')
SELECT CAST(Value AS datetime) FROM #tmp WHERE Field = 'dbi_dbcclastknowngood'
DROP TABLE #tmp

OR

DBCC TRACEON (3604);
GO
-- Page 9 of file 1 is the database boot page; look for the dbi_dbccLastKnownGood field
DBCC PAGE (YourDatabase, 1, 9, 3);
GO

In my morning DBA checklist, I’m using the first query to check if there has been a database integrity check in the last 7 days. This query works well; however, last week I noticed an issue. As soon as a database is in read-only or emergency mode, the DBCC CHECKDB command will succeed but the dbi_dbcclastknowngood field will (or can) not be updated. For that reason, my check indicates a failed status for my read-only databases…

Let’s take an example. First of all, I’m going to perform a DBCC CHECKDB on my AdventureWorks database.

DBCC CHECKDB(AdventureWorks2012)

Once the check is finished, let’s verify the last dbcc check date with the DBCC DBINFO() query.

[Screenshot: DBCC DBINFO() result]

Now, let’s set the database to read-only and do the test again.
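
A minimal sketch of that test, reusing the AdventureWorks2012 database from above:

-- Set the database to read-only and run the integrity check again
ALTER DATABASE AdventureWorks2012 SET READ_ONLY WITH ROLLBACK IMMEDIATE
GO
DBCC CHECKDB(AdventureWorks2012)
GO
-- Now re-run the DBCC DBINFO() query from above and compare dbi_dbcclastknowngood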

[Screenshot: DBCC DBINFO() result on the read-only database]

As you might notice, the last DBCC CHECKDB time is the same in both cases.

I found a good tip in an old blog post by Jonathan Kehayias to find the correct information: just scan the SQL Server Error Log for the last 7 days. If I check the logs from my previous tests, you can see that there was a DBCC CHECKDB at 14:57:22, which was on my read-only database.

[Screenshot: DBCC CHECKDB entries in the SQL Server Error Log]

I’ve modified Jonathan’s script: I’m using xp_readerrorlog and I’m no longer checking for the SQL 2000 version… I’ve also replaced the DBCC check in my DBA checklist with the query below.

CREATE TABLE #ErrorLog
	(
	  Logdate datetime ,
	  ProcessInfo VARCHAR(12) ,
	  MsgText VARCHAR(3000)
	)
CREATE TABLE #NumberOfErrorLogFiles
	(
		ArchiveID tinyint,
		FileDate datetime,
		FileSize int
	)
INSERT INTO #NumberOfErrorLogFiles
EXEC master..xp_enumerrorlogs

DECLARE @SinceDate datetime
SET @SinceDate = DATEADD(dd, -7, GETDATE())
DECLARE @ErrorFile tinyint
DECLARE @LastLogFile tinyint
-- Oldest log file that still covers the 7-day window
SELECT @LastLogFile = MIN(ArchiveID) FROM #NumberOfErrorLogFiles
WHERE FileDate < @SinceDate
DECLARE @DBCCStatement nvarchar(1024)
SET @DBCCStatement = N'DBCC CHECKDB (' + DB_NAME() + ')'

-- Scan every log file from the current one (0) up to @LastLogFile
SET @ErrorFile = 0
WHILE @ErrorFile <= ISNULL(@LastLogFile, 0)
BEGIN
	INSERT INTO #ErrorLog
	EXEC master.dbo.xp_readerrorlog @ErrorFile, 1, @DBCCStatement, NULL, @SinceDate, NULL, N'ASC'
	SET @ErrorFile = @ErrorFile + 1
END

SELECT * FROM #ErrorLog WHERE Logdate > @SinceDate

DROP TABLE #ErrorLog
DROP TABLE #NumberOfErrorLogFiles

Have fun!

Pieter

Speaking at SQL Saturday Oslo #317

To end the holiday period in Belgium in style, I’m going on a trip to Oslo.
Not on vacation, but to speak at the first edition of SQL Saturday!
Well, in some cases, this might be considered as a small vacation ;)

I’m very honored and excited that my session “How to manage many SQL Servers the easy way” has been selected.
It was a really impressive list of submitted sessions, but my session made it!
It’s the second time that I’ll be presenting this session at a SQL Saturday.
The first time was at SQL Saturday Exeter a few months ago, and it got really good feedback!
The purpose of this session is to point out which SQL Server tools you can use for multi-server management.
It is bulk loaded with demos and it will give you a good idea of which features can be helpful in your environment.

I’m really looking forward to meeting Cathrine Wilhelmsen (twitter | blog) and Tone Hansson (twitter | blog), the organizers of the event, who are doing an excellent job!
A lot of other great SQL names are on the schedule. I’m looking forward to meeting some new speakers like Tim Ford, Bob Duffy, Niko Neugebauer, Régis Baccaro, Kalen Delaney and many others,
and of course the other international speakers that I have met before, like Boris Hristov, Matija Lah, André Kamman, Marco Russo, …

Hope to see you all there!

Tracking login and user changes in SQL Server

Did you ever want to track SQL Server login and database user changes on all your SQL Servers? At one of my customers, the IT guys are sysadmins on the SQL Servers. In fact, the IT department is managing the SQL Servers when I’m not around. One day, “somebody” removed a user and of course, nobody did it ;)

The first thing that came to mind was implementing SQL Server Audit. A good solution, but not all versions of SQL Server support SQL Server Audit. SQL Server 2008 and 2008 R2 support SQL Audit only in Enterprise Edition, and there is no audit at all in SQL Server 2005. So, I had to come up with a solution that worked on these versions as well. My solution is based on this blog post by Aaron Bertrand (twitter | blog), but I’ve extended it.

Server Audit

I’ve implemented SQL Server Audit on all SQL Server 2008 and R2 Enterprise servers and on the SQL 2012 servers. Unfortunately no SQL 2014 yet, but you should also implement SQL Audit on that version. The audit dumps all the audit information into an audit log file on the server.

USE [master]
GO

CREATE SERVER AUDIT [Audit Login Changes]
TO FILE 
(	FILEPATH = N'ENTERYOURFILEPATHHERE'
	,MAXSIZE = 1024 MB
	,MAX_FILES = 10
	,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 1000
	,ON_FAILURE = CONTINUE
	,AUDIT_GUID = '33e055a9-1924-47c3-9798-e8875a5bcd42'
)
ALTER SERVER AUDIT [Audit Login Changes] WITH (STATE = ON)
GO


I defined the following audit action types because I wanted to know about all changes made to logins, roles and users.

Audit Action Type – Description
DATABASE_ROLE_MEMBER_CHANGE_GROUP – Raised whenever a login is added to or removed from a database role
SERVER_ROLE_MEMBER_CHANGE_GROUP – Raised whenever a login is added to or removed from a fixed server role
DATABASE_PERMISSION_CHANGE_GROUP – Raised for any database permission change
SERVER_OBJECT_PERMISSION_CHANGE_GROUP – Raised whenever a GRANT, REVOKE, or DENY is issued for a server object permission
SERVER_PERMISSION_CHANGE_GROUP – Raised when a GRANT, REVOKE, or DENY is issued for permissions in the server scope, such as creating a login
DATABASE_PRINCIPAL_CHANGE_GROUP – Raised when users are created, altered, or dropped in a database
SERVER_PRINCIPAL_CHANGE_GROUP – Raised when server principals are created, altered, or dropped

More information on the Audit Action Types can be found here.

USE [master]
GO

CREATE SERVER AUDIT SPECIFICATION [Audit Login]
FOR SERVER AUDIT [Audit Login Changes]
ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP),
ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
ADD (DATABASE_PERMISSION_CHANGE_GROUP),
ADD (SERVER_OBJECT_PERMISSION_CHANGE_GROUP),
ADD (SERVER_PERMISSION_CHANGE_GROUP),
ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
ADD (SERVER_PRINCIPAL_CHANGE_GROUP)
WITH (STATE = ON)
GO

Event Notifications

As mentioned before, we cannot use SQL Audit on SQL 2008 and R2 Standard Edition or on SQL 2005. As an alternative, you can set up event notifications, which capture messages via Service Broker. The scripts below are based on Aaron’s scripts, but I’ve added more events because I wanted to trace more than just “change password”.

Create the following table in the msdb database

USE [msdb];
GO

CREATE TABLE dbo.SecurityChangeLog
(
    ChangeLogID          INT IDENTITY(1,1),
    LoginName            SYSNAME,
    UserName             SYSNAME,
    DatabaseName         SYSNAME,
    SchemaName           SYSNAME,
    ObjectName           SYSNAME,
    ObjectType           VARCHAR(50),
    DDLCommand           VARCHAR(MAX),
    EventTime            DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT PK_ChangeLogID PRIMARY KEY (ChangeLogID)
);


Set up the queue

CREATE QUEUE SecurityChangeQueue;
GO

CREATE SERVICE SecurityChangeService ON QUEUE SecurityChangeQueue
  ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO


Set up the event notification. If you check the “FOR” clause, you will notice that these are the same actions as defined in the SQL Audit specification.

CREATE EVENT NOTIFICATION CreateLoginNotification
    ON SERVER WITH FAN_IN
    FOR CREATE_LOGIN,ALTER_LOGIN,DROP_LOGIN,CREATE_USER,ALTER_USER,DROP_USER,ADD_SERVER_ROLE_MEMBER,DROP_SERVER_ROLE_MEMBER,ADD_ROLE_MEMBER,DROP_ROLE_MEMBER
    TO SERVICE 'SecurityChangeService', 'current database';
GO


Install the following stored procedure to log all the event notifications into the table we’ve just created. You might notice that the loop checks the SQL Server version for some events, because the event notification content differs between SQL 2008 (R2) and SQL 2005.

USE [msdb];
GO
CREATE PROCEDURE [dbo].[LogSecurityChange]
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
	DECLARE @version int
    DECLARE @message_body XML;
	set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))

    WHILE (1 = 1)
    BEGIN
       WAITFOR 
       ( 
         RECEIVE TOP(1) @message_body = message_body
         FROM dbo.SecurityChangeQueue
       ), TIMEOUT 1000;

       IF (@@ROWCOUNT = 1)
       BEGIN
		-- User events and all SQL 2008+ events include the DefaultSchema and TSQLCommand elements; older role/login events are handled below
		if CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/EventType)')) in ('DROP_USER','CREATE_USER','ALTER_USER') or @version>9
		BEGIN
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DefaultSchema)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'))
		END
		ELSE
		BEGIN
			INSERT dbo.SecurityChangeLog(LoginName,UserName,DatabaseName,SchemaName,ObjectName,ObjectType,DDLCommand) 
			SELECT CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/LoginName)')), 
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/UserName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/DatabaseName)')),
				CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/SchemaName)')),
			    CONVERT(SYSNAME, @message_body.query('data(/EVENT_INSTANCE/ObjectName)')),
		        CONVERT(VARCHAR(50), @message_body.query('data(/EVENT_INSTANCE/ObjectType)')),
		        CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/EventType)')) + ' ' + 
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/RoleName)')) + ' FOR ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/LoginType)')) + ' ' +
				CONVERT(VARCHAR(MAX), @message_body.query('data(/EVENT_INSTANCE/ObjectName)'))
		END
       END
    END
END


The last step is modifying the queue so that it uses the stored procedure and starts tracking the login and user changes.

ALTER QUEUE SecurityChangeQueue
WITH ACTIVATION
(
   STATUS = ON,
   PROCEDURE_NAME = dbo.LogSecurityChange,
   MAX_QUEUE_READERS = 1,
   EXECUTE AS OWNER
);
GO
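
A quick way to verify the whole setup is to trigger an auditable event yourself; a sketch, with a throwaway login name:

-- Create a login, give the activation procedure a moment, and check the log table
CREATE LOGIN TestAuditLogin WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
WAITFOR DELAY '00:00:05';
SELECT TOP (5) * FROM msdb.dbo.SecurityChangeLog ORDER BY EventTime DESC;
GO
DROP LOGIN TestAuditLogin;
GO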

Bring it all together

All the information is now logged on each individual server. I want all the audit log information in one central database so that the customer can run reports on it. First of all, I’ve created a database on one of the SQL Servers and created a table that will store all the audit logs of all the servers.

CREATE TABLE [dbo].[SecurityChangeLog](
	[ChangeLogID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [sysname] NOT NULL,
	[LoginName] [sysname] NOT NULL,
	[DatabaseName] [sysname] NOT NULL,
	[ObjectName] [nvarchar](128) NULL,
	[ObjectType] [varchar](50) NULL,
	[DDLCommand] [nvarchar](max) NULL,
	[EventTime] [datetime2](7) NULL,
 CONSTRAINT [PK_ChangeLogID] PRIMARY KEY CLUSTERED 
(
	[ChangeLogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


The next step is retrieving the log information from each server. I was thinking of using an SSIS package for this, but it seemed easier to use PowerShell instead. The script below is based on the central management server that I’m using. First, I retrieve all the SQL Servers from which I have to collect the audit data. This list is stored in the table msdb.dbo.sysmanagement_shared_registered_servers_internal on the central management server. Make sure that you replace YOURGROUPID with the appropriate group ID of your CMS and YOURCMS with the instance name of your CMS.

Once we have the list, I execute a script GetAuditLog.sql against each instance to retrieve the audit information.

The final step writes the retrieved audit information into the database. Make sure you change YOURDATABASENAME and YOURSQLSERVERINSTANCE in the script to your database name and SQL instance.

Schedule a SQL Server Agent job to execute the PowerShell script on a daily basis.


#Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Import-Module "sqlps" -DisableNameChecking
cls
$InputFile = "YOURFILEPATH + GetAuditLog.sql"

#Get the list of servers from the Central Management Server
$InstanceNameList = Invoke-Sqlcmd -Query "select server_name from msdb.dbo.sysmanagement_shared_registered_servers_internal 
                                          where server_group_id=YOURGROUPID" -ServerInstance "YOURCMS"

#Initialize the result collections (Clear-Variable fails when the variable doesn't exist yet)
$results = @()
$record = $null

#Get Audit information
foreach($instanceName in $instanceNameList) 
{ 
    write-host "Executing query against server: " $instanceName.server_name
    $results += Invoke-Sqlcmd -InputFile $InputFile -ServerInstance $instanceName.server_name
}
#Save the Audit Log into the database
foreach($record in $results)
{
#write-host "ServerName: " $record.ServerName " " $record.DDLCommand
Invoke-Sqlcmd -Query "insert into [YOURDATABASENAME].[dbo].[SecurityChangeLog] ([ServerName],[LoginName],[DatabaseName],[ObjectName],[ObjectType],[DDLCommand],[EventTime])  
                      Values('$($record.ServerName)','$($record.LoginName)','$($record.DatabaseName)','$($record.ObjectName)','$($record.ObjectType)','$($record.DDLCommand)','$($record.EventTime)')" -ServerInstance "YOURSQLSERVERINSTANCE" 
}


Below you can find the SQL script that the PowerShell script uses to retrieve the audit information. Depending on the version, it reads either the audit file or the event notification table. I’m only selecting the logs of the last day because the PowerShell script is scheduled to run on a daily basis.

USE MSDB
GO
declare @AuditLog  table (
LoginName sysname,
DatabaseName sysname,
ObjectName nvarchar(128),
ObjectType varchar(50),
DDLCommand nvarchar(4000),
EventTime datetime)
declare @LogFilePath varchar(256)
declare @version int
-- Major version number: 9 = SQL 2005, 10 = SQL 2008/R2, 11 = SQL 2012
set @version = (SELECT convert (int,REPLACE (LEFT (CONVERT (varchar, SERVERPROPERTY ('ProductVersion')),2), '.', '')))
-- SQL 2008/R2 Enterprise or SQL 2012: read the SQL Audit file
IF (@version=10 and (select convert(varchar,SERVERPROPERTY('Edition'))) not like '%Standard%') OR (@version=11)
BEGIN
	select @LogFilePath=log_file_path from sys.server_file_audits
	where name='Audit Login Changes'
	IF @LogFilePath IS NOT NULL
	BEGIN
		set @LogFilePath=@LogFilePath +'*.sqlaudit'
		insert into @AuditLog
		SELECT session_server_principal_name,database_name,object_name,m.class_type_desc,statement,event_time FROM sys.fn_get_audit_file (@LogFilePath,default,default) f
		INNER JOIN sys.dm_audit_class_type_map m on m.class_type=f.class_type;
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from @AuditLog
		where EventTime>dateadd(dd,-1,getdate())
	END
END
ELSE
BEGIN
	-- Standard Edition or SQL 2005: read the event notification table in msdb
	IF exists (select name from msdb.sys.tables where name='SecurityChangeLog')
		select serverproperty('Servername') as ServerName,LoginName,DatabaseName,ObjectName,ObjectType,replace(DDLCommand,'''','''''') as DDLCommand,EventTime from msdb.dbo.SecurityChangeLog
	    where EventTime>dateadd(dd,-1,getdate()) and DDLCommand<>''
END

Report

To make it easy for the customer to retrieve the data, I’ve created this report.

[Screenshot: Security Changes report]

I know that this solution is not really secure, because people can just stop the audit, disable the queue or delete the records from the tables, but I think it is a good start for tracking login and user modifications.

Have fun!

It’s hard to convince clients to use SQL 2014

SQL Server 2014 was released on the 1st of April, and there are some really cool features that might help you and your organization and can definitely justify an upgrade. In my job as a SQL consultant, I have a lot of different customers, and none of them is currently working on SQL Server 2014 or planning to upgrade to this new version. I find it really hard to convince my clients to upgrade, or even to pick it for a brand new installation.

Why is it so hard to convince people to start using SQL Server 2014…? After questioning my customers, I came up with these 5 reasons.

I don’t trust it

This is probably the most common argument of my customers. They don’t trust the new version because it has just been released and probably has a lot of bugs. “Let’s wait till the first service pack has been released; then we know it is stable…” Most of my clients tell me that there are just too many risks in installing an RTM version of SQL Server. “What happens if something goes wrong and we’re not prepared for it?” I don’t think Microsoft will release a version of SQL Server that is not stable or not fully tested. It’s true that more bugs are found in the beginning of a new release, but I wouldn’t state that the new release is unstable. Microsoft is still releasing Service Packs and Cumulative Updates for older versions of SQL Server. Does this mean that this software is unstable? Well… no… because you’re probably using it right now. Bug fixes are always part of software, whether it’s a new or an old version.

Oh, is there a new release?

I’ve noticed that many people don’t know that a new version has been released. They just install the older version that they know. Or they have heard about a new release but are not aware of all the cool new functionality that could benefit their company. And that is where we, the SQL Family, come in: we try to make people aware of all these new features with SQL events, webinars, workshops, blogs, …

My vendor doesn’t support it yet…

Another reason that I hear a lot is that the vendor of a software package (which is using a SQL database) does not support the new version yet. I even had to install a brand new SQL Server 2005 last month because the vendor didn’t support any other version… I told my customer to get rid of that software and find a good alternative. The vendor’s software will probably work on SQL Server 2014, but if you use it, you will lose your full support on the software… Try the upgrade advisor and see if there is some exotic configuration that will not work anymore. I doubt it… I usually send the report to my customer and the supplier. And I’ve noticed lately that suppliers are responding positively that they will support SQL Server 2014 in a next major release.

It’s too expensive

Upgrading to a new version of SQL Server is a project that requires some investment: not only the license cost, but also new hardware, re-coding your application, testing… Many customers complain that Enterprise Edition has become really expensive, but do they really need Enterprise Edition? From SQL Server 2008 onwards, you can use the DMV sys.dm_db_persisted_sku_features to view all Enterprise-specific features that are enabled in the current database. If they do need Enterprise Edition, it might be worth starting a consolidation project instead of an upgrade project to save license costs.
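
A quick check, run in each database you want to evaluate:

USE YourDatabase;
GO
-- Lists the Enterprise-only features in use; an empty result means
-- the database could run on a lower edition
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;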

The upgrade can also save you money, because new features like page and row compression or Resource Governor can save resources, and the new T-SQL improvements can save your developers a lot of time.

No time, too much work

I hear this a lot… “Upgrade…? Sorry, but we don’t have time for that… Why should we upgrade? The database is working just fine and we have more important things on our agenda.” I don’t think it really has anything to do with “no time”. Some people are just not motivated to do an upgrade. It’s hard to convince people who are not willing to learn and not open to new ideas. And maybe that’s because of the points mentioned above: they don’t trust the software and it will cost the company a lot of money.

I don’t know if I’m the only one experiencing this :) But I hope that I can change the perception of my clients… it’s a tough job though… believe me…

How to change the database owner for the secondary replica database

When you restore a database to a SQL Server instance where the database doesn’t exist, the owner of the database is set to the user that executed the restore. Actually, this is also the case when attaching a database. The ownership can give the login extra permissions, which in some cases you don’t want. A good rule of thumb is to choose “sa” as the database owner. This can easily be done by executing the following query after the restore is done.

ALTER AUTHORIZATION ON DATABASE::<YourDatabaseName> to sa;

Just keep in mind that when you change the owner of the database, the user that owned the database might have fewer or no permissions on the database anymore.
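
To see which databases this applies to, a simple check of the current owners:

-- List the owner of every database on the instance
SELECT name AS DatabaseName, SUSER_SNAME(owner_sid) AS DatabaseOwner
FROM sys.databases
ORDER BY name;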

Unfortunately, if you restore a database as read-only, or if you leave it in restoring mode for an AlwaysOn Availability Group configuration, you cannot change the database owner anymore.
Even if you set up an AlwaysOn Availability Group with the wizard of SQL Server Management Studio, the owner of the database on the secondary replica will be the user that executed the wizard. Quite obvious, because in the background the wizard is performing the restore :)
Your database must be in read/write mode before you can change the database owner.

If the database is read-only, it can briefly be set to read/write for the modification, and then set back. That shouldn’t be an issue.
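
For example (a sketch, with a hypothetical database name):

ALTER DATABASE [YourDatabaseName] SET READ_WRITE WITH ROLLBACK IMMEDIATE;
GO
ALTER AUTHORIZATION ON DATABASE::[YourDatabaseName] TO sa;
GO
ALTER DATABASE [YourDatabaseName] SET READ_ONLY;
GO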
In case of AlwaysOn Availability Groups, you will have to perform a failover to the secondary replica, change the database owner and failback to the original primary replica.
But, I can imagine that there will be situations where a failover will not be possible, like for example asynchronous commit mode or just the business that doesn’t want downtime.

A possible workaround is to manually prepare the secondary database for the availability group, but to perform the restore as the user sa. The following script can be used:

EXECUTE AS login='sa'
-- NORECOVERY leaves the database in restoring mode, ready to join the availability group
RESTORE DATABASE [YourDatabaseName] FROM DISK = '<YourBackupPath>' WITH NORECOVERY
REVERT
GO

More information on manually preparing a secondary database for an availability group can be found here.

It’s a simple script, but it sets the owner of the database to sa, and you don’t have to worry about a failover to correct the database owner.

Pieter

Test your scripts against different database states

You have probably all created scripts that worked just fine on your instance but didn’t work in production. I’ve seen a lot of scripts that run just fine as long as all the databases have the status ONLINE. But as soon as a database has another state, the scripts start to fail… The possible database states are:

    • ONLINE
    • OFFLINE
    • RESTORING
    • RECOVERING
    • RECOVERY PENDING
    • SUSPECT
    • EMERGENCY

More information on the database states can be found on http://technet.microsoft.com/en-us/library/ms190442.aspx. You should handle the different database states in your scripts and test them against all different database states. Now… the challenge is to force your database in a particular state. In this blog post, I’m going to explain how you can create test databases with a different state.
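
You can verify the current state of all your databases with a simple query:

-- Current state of every database on the instance
SELECT name, state_desc FROM sys.databases;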

ONLINE

That’s the easy part. When you create a new database, it automatically comes ONLINE.

CREATE DATABASE ONLINEDB

OFFLINE

To create an OFFLINE database, you first have to create the database and then run the ALTER DATABASE statement to bring it OFFLINE.

CREATE DATABASE OFFLINEDB
GO
ALTER DATABASE OFFLINEDB SET OFFLINE
GO

You can also bring a database offline with the GUI of the SQL Server Management Studio.

RESTORING

To create a database in the RESTORING state, create a new database, take a FULL BACKUP of the database and RESTORE the database with the NORECOVERY option. Your database will not come ONLINE but will stay in RESTORING mode.

CREATE DATABASE RESTORINGDB
GO
BACKUP DATABASE RESTORINGDB TO DISK='<YourBackupPath>RESTORINGDB.bak' WITH COPY_ONLY
GO
RESTORE DATABASE RESTORINGDB
FROM DISK='<YourBackupPath>RESTORINGDB.bak'
WITH REPLACE,NORECOVERY
GO

RECOVERING

This is a temporary state. Normally, a database is only in the “In Recovery” state during startup: when SQL Server starts up the database, or when you bring it back ONLINE from an OFFLINE state. I haven’t found a method to simulate a database that stays in RECOVERING mode…

RECOVERY PENDING

Follow these steps to create a database with a RECOVERY PENDING state

  • Create an empty database
  • Stop the SQL Server Service
  • Rename or remove the database transaction log file of the database
  • Restart the SQL Server Service

You will notice that the database now has a Recovery Pending state.
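
A sketch of those steps, assuming the default data folder and log file name:

CREATE DATABASE RECOVERYPENDINGDB
GO
-- Stop the instance (or use SQL Server Configuration Manager)
SHUTDOWN WITH NOWAIT
-- Rename RECOVERYPENDINGDB_log.ldf in the data folder, restart the service, then:
SELECT name, state_desc FROM sys.databases WHERE name = 'RECOVERYPENDINGDB'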

SUSPECT

I followed this blog post by Paul Randal (blog | twitter) to create a SUSPECT database. However, my database always ended up in the Recovery Pending state and not SUSPECT… In the blog post, Paul performs an update in an uncommitted transaction. I changed it into an insert of an extra colleague (myself ;)) and stopped the instance. I opened the log file with the same XVI32 tool, searched for my name and changed that section. When I restarted the SQL Server instance, the engine was unable to roll back my transaction and the database finally got into a SUSPECT state. Paul describes the differences between SUSPECT and RECOVERY PENDING in this article.

EMERGENCY

To create a database in EMERGENCY mode, you first have to create the database and then run the ALTER DATABASE statement.

CREATE DATABASE EMERGENCYDB
GO
ALTER DATABASE [EMERGENCYDB] SET EMERGENCY
GO

OTHER DATABASE OPTIONS

Apart from the above mentioned states, I would also recommend creating the following test databases.

  • Read Only –  ALTER DATABASE … SET READ_ONLY
  • Restricted User - ALTER DATABASE … SET RESTRICTED_USER
  • Single user - ALTER DATABASE … SET SINGLE_USER
  • A database with name DB_EX@T!ç-$&ù!&.,.;=+#

Well, start testing your scripts and see how many of them could be improved.

Moving transaction log files

One of my customers took my advice and finally wanted to move the transaction log files to a separate disk. The current disk space issues on that server probably helped a little bit here as well ;)
The best way to move database and log files to a new location is by specifying the new file location in the FILENAME clause of the ALTER DATABASE statement.
To move a file of a database, follow these steps (a sketch for a single database follows the list):

  1. Set the database offline
  2. Move the file or files to the new location
  3. Alter the database for each file that has been moved
  4. Bring the database back online
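
For a single database, that looks like this (database name and paths are hypothetical):

ALTER DATABASE [YourDatabaseName] SET OFFLINE
GO
-- Move YourDatabaseName_log.ldf to the new volume (e.g. E:\) with Windows Explorer or a copy command
ALTER DATABASE [YourDatabaseName] MODIFY FILE (NAME = YourDatabaseName_log, FILENAME = 'E:\YourDatabaseName_log.ldf')
GO
ALTER DATABASE [YourDatabaseName] SET ONLINE
GO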

This method works really well for a single database, but if you have to repeat it many times, it’s better to automate it.
And in my case, the customer wanted to move the transaction log files of all 230 databases in one go.

The first part of the script collects the current physical paths and file names of the transaction log files and stores them in a table variable.
Don’t forget to specify the parameter @NewPath, which should contain the new path for your transaction log files.
Secondly, a loop goes over all the databases that are ONLINE and executes the 4 actions described above. With the “ONLINE” filter, you avoid bringing OFFLINE databases back online.
If the move was successful, the script deletes the old transaction log file. I’m using xp_cmdshell to perform the COPY and the DEL (I know…). If you know a better solution, just drop me a comment!
Also keep in mind that this script only works within the same SQL Server instance.

SET NOCOUNT ON
GO
declare @DatabaseFileLocation as table  (
[Databaseid] int,
[DBName] sysname,
[FileType] int,
[name] varchar(256),
[physical_name] varchar(1024)
)

DECLARE @sqlcmd nvarchar(1024), @Newpath nvarchar(1024),@ID int

--Specify the new transaction log file folder
set @Newpath='E:'
--Start after database_id 4 so the system databases are skipped
set @ID=4

--Collect the current physical name of the transaction log file
SET @sqlcmd = N'use [?]
select DB_ID(), DB_NAME(), type, name, physical_name from sys.database_files where type=1'

Insert into @DatabaseFileLocation
exec sp_MSForEachDB @sqlcmd

while 0=0 
begin
	select @ID=min(DATABASE_ID) from sys.databases where database_id>@ID and state_desc='ONLINE'
	if @@rowcount=0 or @ID is null break
	begin
		--Set database OFFLINE
		select @sqlcmd= N'ALTER DATABASE [' + name + '] SET OFFLINE WITH ROLLBACK IMMEDIATE' from sys.databases
		where database_id=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--COPY the transaction log file to the the new path
		SELECT @sqlcmd= 'COPY ' + physical_name + ' ' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec master..xp_cmdshell @sqlcmd
		--ALTER the database so that it will point to the new transaction log file location
		select @sqlcmd='ALTER DATABASE [' + DBName + '] MODIFY FILE ( NAME = ['+ name + '], FILENAME = ''' + REPLACE([physical_name],SUBSTRING(physical_name,0,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1),@Newpath) + ''')' from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--Set database ONLINE
		select @sqlcmd= N'ALTER DATABASE [' + name + '] SET ONLINE' from sys.databases
		where database_id=@ID
		print @sqlcmd
		exec sp_executesql @sqlcmd
		--If all went well, remove the old transaction log files
		SELECT @sqlcmd= 'DEL ' + physical_name  from @DatabaseFileLocation where Databaseid=@ID
		print @sqlcmd
		exec master..xp_cmdshell @sqlcmd
	end
end

DBA Morning Checklist with PBM (Part 3)

A few months ago, I blogged about how I do my morning checklist using Policy Based Management together with a Central Management Server. This method works really well for me and I’ve implemented it at all my customers by now. As the months passed, I added extra checks to my list and created the necessary policies for them. This blog post gives a short description of what my full morning DBA checklist looks like.

All jobs succeeded

Are there any jobs that have failed during the last 7 days? I’m using 7 days here because I mostly visit my customers on a weekly basis. You can of course change that period in the condition that is checked by the policy.

Check SQL Server services are running

This policy checks if all SQL Server services are still running. Sounds ridiculous, but it has happened to me a few times that the SQL Server Agent was stopped without anybody noticing. This check only works from SQL Server 2008 R2 SP1 onwards because I’m using the DMV sys.dm_server_services, which is available since that version.
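
The underlying check boils down to a query like this (a sketch):

-- Status of the SQL Server services on this instance
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services;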

Data and Log files on separate volumes

For best performance, your data and log files should be stored on separate volumes. In the condition, I’m just checking the @DataAndLogFilesOnSeparateLogicalVolumes property in the Database Performance facet.

Data File Growth in MB

The autogrowth setting of your data files should be set in MB instead of %. I’m also verifying that the growth setting is at least 128 MB…

Database Auto Close

You should never set the auto close option to True on your databases. This option causes performance degradation because of the overhead of opening and closing the database after each connection. The policy checks if there is any database on the server that has auto close set to True.

Database Auto Create Statistics

This policy will check if the Auto Create Statistics database setting is set to True. In some cases, like on some SharePoint databases, you have to ignore this check because the statistics are handled by SharePoint.

Database Auto Shrink

This rule checks whether the auto shrink database option is set to False. This option should never be set to True because it will lead to physical fragmentation.

Database Auto Update Statistics

This policy will check if the Auto Update Statistics database setting is set to True. In some cases, like on some SharePoint databases, you have to ignore this check because the statistics are handled by SharePoint.

Database Collation

Is the database collation the same as the collation of my master database? The condition checks the @CollationMatchesModelOrMaster property in the Database Performance facet.

Database Compatibility Level

The database compatibility level should be equal to the server level. The condition checks the @CompatibilityLevel database property against the compatibility level of the master database, which I retrieve by querying sys.databases.

Database DBCC CHECKDB last 7 days

Has a DBCC CHECKDB been performed on the databases during the last week? I’m choosing 7 days here because this is how I schedule the DBCC command at my customers. The condition is checked with a T-SQL query that uses the DBCC DBINFO() information.

Database is in Normal State

The status of the database should be normal. This policy will indicate a failure for databases in an offline, suspect, … state.

Database owner

This policy will check if the owner of my database is set to sa.

Database page verify

Database option Page Verify should be set to CHECKSUM.

Free disk space

This policy checks if there is any LUN on the server that has less than 10% free disk space. The check itself is done with a PowerShell command or, if PowerShell is not installed on the server, with the fsutil command. The script also supports mount points.
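
The fsutil fallback looks roughly like this (a sketch; it assumes xp_cmdshell is enabled):

-- Raw free/total bytes for a volume; the policy condition parses this output
EXEC master..xp_cmdshell 'fsutil volume diskfree C:';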

Full or Differential Backup

Is there a full backup of my database from the last 24 hours? Or… is there a differential backup of my database from the last 24 hours and a full backup from the last 7 days? Some of the databases are too big to take a full backup every day; this check verifies if there is a differential backup instead. I’ve planned a full backup every week, next to the differential backups. You can of course change the condition according to your backup strategy.
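
The condition behind such a check boils down to a query on the backup history (a sketch):

-- Most recent full (D), differential (I) and log (L) backup per database
SELECT database_name, type, MAX(backup_finish_date) AS LastBackup
FROM msdb.dbo.backupset
GROUP BY database_name, type;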

Jobs owned by sa

Are my jobs owned by the sa user? In the condition, I’ve filtered on the categories Jobs from MSX, Database Maintenance, Full-Text and Uncategorized. There is no facet for this, so I’m using a T-SQL script based on the sysjobs table.

Log File Growth in MB

The autogrowth setting of your log files should be set in MB instead of %. I’m also verifying that the growth setting is at least 128 MB, and not 1 MB as I sometimes see…

Recovery Model = FULL for Production Databases

The policy checks if the recovery model of production databases is set to FULL. By default, I always use the full recovery model for production databases, unless the application owner tells me that simple is fine. I had to find a way to specify whether a database lives on a production server or not. I used a simple trick: specifying an extended property “Environment” on the master database.
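
One way to set that property (a sketch; the property value is just an example):

USE master;
GO
-- Tag this instance as production; the policy's server restriction reads this property
EXEC sys.sp_addextendedproperty @name = N'Environment', @value = N'Production';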

In the policy, I’ve specified a server restriction based on the extended property.

Severity Alerts

Were there any alerts with a severity higher than 16 in the SQL Server Log during the past 7 days? When your SQL Server Log file is really big, it might happen that this policy returns a timeout.

Transaction Log Backup Taken

Is there a transaction log backup of my database from the last 2 hours? Again, you can change this time period depending on the backup strategy that you’re using.

All these checks give me a nice overview of how healthy my SQL environment is, in no time!

You can download all my policies here.

In case you should have any issues or you want to share some extra policies, just let me know!

Pieter