Piecemeal restore of a partitioned table

A few weeks ago I presented my Advanced Backup and Restore session at SQLRally in Denmark.
One of the topics I showed was the piecemeal restore functionality. This allows you to restore your database in “pieces”: the files or filegroups that have already been restored are accessible while the others are still being restored. Jes Borland (blog | twitter) has written a good blog post on this topic.
During the session, I got a question whether it was possible to restore filegroups that are used by a partitioned table, so that the table would become partially available.
My answer was that you had to restore all the files before the table becomes available, but it turns out that answer was not correct.

With this blog post I want to correct my answer.

Setup the test scenario

For testing purposes, I created a database, MyDatabase, with 5 filegroups: PRIMARY, Partition1, Partition2, Partition3 and Partition4.

Once the database was created, I added my partition function and partition scheme.
As you can see, the partition function has 3 boundaries (1, 100, 1000) and the partition scheme specifies which filegroup holds each of the four partitions. Because the function is defined as RANGE LEFT, partition 1 holds the values up to and including 1, partition 2 the values 2 to 100, partition 3 the values 101 to 1000, and partition 4 everything above 1000.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (Partition1, Partition2, Partition3, Partition4);

The next step was creating a partitioned table based on the partition scheme.

--Create the partitioned table
CREATE TABLE PartitionedTable
	(PartitionColumnID int NOT NULL,
	Name varchar(100) NOT NULL,
	CONSTRAINT [PK_PartitionColumnID] PRIMARY KEY CLUSTERED
	(
		PartitionColumnID ASC
	) ON [myRangePS1]([PartitionColumnID])
	) ON myRangePS1(PartitionColumnID);

And fill the table with 1000 records

DECLARE @i AS int
SET @i = 1
WHILE @i <= 1000
	BEGIN
		INSERT INTO PartitionedTable
		SELECT @i, 'Name' + CAST(@i AS varchar)
		SET @i = @i + 1
	END

The query below gives me the number of records per partition:

SELECT $PARTITION.myRangePF1(PartitionColumnID) AS Partition, 
COUNT(*) AS [COUNT] FROM PartitionedTable
GROUP BY $PARTITION.myRangePF1(PartitionColumnID)
ORDER BY Partition ;
GO

Restore the partitioned table

Before I can start with a piecemeal restore, I of course need a backup of my database. Don’t forget to change the <BackupPath> in the script.

--Take a new FULL BACKUP
BACKUP DATABASE [MyDatabase]
TO DISK = N'<BackupPath>MyDatabase.bak' WITH COMPRESSION,INIT
GO
BACKUP LOG [MyDatabase] TO DISK = N'<BackupPath>MyDatabase_Log.trn' WITH FORMAT,INIT,COMPRESSION
GO

OK, let’s drop the database and begin with the piecemeal restore.

USE [master]
GO
ALTER DATABASE [MyDatabase] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE [MyDatabase]
GO

First of all, I’m going to restore the PRIMARY filegroup so that my database is already available and “ONLINE” again.

RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'PRIMARY'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH PARTIAL, NORECOVERY 
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of my database files, you can see that the primary data file is already ONLINE:

SELECT [name], [state_desc] 
FROM [MyDatabase].sys.database_files;
GO

Secondly, I restore the Partition1 filegroup. Don’t forget to always apply the transaction log backup afterwards.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition1'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of my filegroups again, the Partition1 filegroup is now ONLINE as well.

I know that this filegroup stores 1 record (see “the number of records per partition” above), so let’s see if I can query the table.

use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable
GO

Well, I get an error that the second partition cannot be accessed, but if you look at the Results tab, the query still returned 1 record!

Next, I restored the Partition2 filegroup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition2'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

And I executed the same query. Again I got an error, this time that the Partition3 filegroup was not accessible, but the query returned 100 rows. This is correct, because I know that there were 100 records in the first 2 partitions of the table.

Of course, if you use a WHERE clause that only touches the ONLINE filegroups, you will not get any errors at all.

--Try to select the table, querying only the ONLINE partitions
use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable where PartitionColumnID<=100
GO

Finally, I also restored the Partition3 and Partition4 filegroups, and my table was completely available again.
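For completeness, restoring the last two filegroups follows exactly the same pattern as the previous steps (using the same <BackupPath> placeholder as above):

--Restore the remaining filegroups
USE master;
GO
RESTORE DATABASE [MyDatabase]
FILEGROUP = 'Partition3', FILEGROUP = 'Partition4'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO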

Conclusion

My answer during my session was not correct. It is possible to restore a partitioned table in pieces and make part of the table available while the other filegroups are still being restored.
This can be really helpful to improve your RTO. Instead of restoring the complete table in one go, you can restore only the partitions that you really need to get your application back up and running.
The less important partitions, for example the ones that only contain historical data, can be restored at a later stage.

Well, as always, you’re never too old to learn something!

Speaking at PASS SQLRally Nordic 2015

PASS SQLRally Nordic is back – bigger than ever! And it looks like that claim will hold: if you look at the agenda, it will definitely be a top event!
There are a lot of international top speakers, like Brent Ozar, Grant Fritchey, Itzik Ben-Gan and many more. They are joined by other top industry speakers from across the Nordic region and the world for 3 days of SQL Server and BI training.
The event will be held on March 2-4, 2015, at the Tivoli Congress Center in Copenhagen, Denmark. I’ve never been to Copenhagen before, so I’m really looking forward to visiting the city and catching a glimpse of the Little Mermaid.

I’m really happy and proud that I was selected as one of the speakers!
I’m going to talk about Advanced Backup and Restore. Most DBAs know the standard options for implementing a basic backup and restore plan. But there are also lesser-known options that can help you with a more advanced recovery plan and that will increase the speed of your backups and restores. My session is bulk loaded with demos and will give you a good idea which infrequently used features, like partial backup, backup encryption and piecemeal restore, can be helpful in your environment. You will also learn how you can improve the performance of your backups! My timeslot is on the third day, Wednesday, March 4, from 12:45 pm to 1:45 pm. The full agenda of SQLRally Nordic can be found here.

Well, I hope to see you all in Copenhagen in March!

Pieter

Restore System Databases

Have you ever had to restore your system databases? It’s not that difficult when the databases are still available (or at least some version of them). But what do you need to do when all system databases are gone?
For example, when your system database LUN is missing. Books Online doesn’t really explain this in detail, so I’ve written this blog post to make it clearer.

Before you read on, please make sure that you have a full backup of all your system databases.
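If you don’t have such backups yet, a minimal sketch could look like this (<BackupPath> is the same placeholder used in the other scripts on this blog):

--Back up the system databases (placeholder path)
BACKUP DATABASE [master] TO DISK = N'<BackupPath>master.bak' WITH INIT
GO
BACKUP DATABASE [model] TO DISK = N'<BackupPath>model.bak' WITH INIT
GO
BACKUP DATABASE [msdb] TO DISK = N'<BackupPath>msdb.bak' WITH INIT
GO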

Restore MASTER database

SQL Server requires the master database before an instance can run at all. Some version of a master database must exist; without it, the service will not start and it will be impossible to restore the backup.
So, the first step is to rebuild the system databases.

To rebuild the system databases, you can follow the steps described at http://msdn.microsoft.com/en-us/library/dd207003.aspx:

Insert the SQL Server installation media into the disk drive or mount the ISO file.

From a command prompt window (run as administrator), enter the following command.

Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=<InstanceName> /SQLSYSADMINACCOUNTS=<accounts>

<InstanceName> is the name of the instance of SQL Server. For the default instance, enter MSSQLSERVER.
<accounts> specifies the Windows groups or individual accounts to add to the sysadmin fixed server role.

When the rebuild is finished, you will find new (empty) data and log files in the data folder of the instance.

Once you have the “dummy” master.mdf and mastlog.ldf files in place, you need to start the instance in single-user mode. This is done by adding the -m startup parameter. Open the SQL Server Configuration Manager to add it (see screenshot below).

In the SQL Server Configuration Manager, start the SQL Server service. Once it is started, we can finally restore the master database. The example below starts sqlcmd and executes a RESTORE DATABASE statement that restores a full database backup of master from a disk device.
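A minimal sketch of that sqlcmd session, assuming the master backup sits at the <BackupPath> placeholder used elsewhere on this blog:

--Run from a sqlcmd session while the instance is in single-user mode
RESTORE DATABASE master
FROM DISK = N'<BackupPath>master.bak'
WITH REPLACE
GO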

The REPLACE option replaces the current master database with the one in the backup file. Also notice that after the restore, the instance shuts down. Remove the -m option from the startup parameters again before you restart SQL Server. You will see that all your user databases are back online and that all your server logins exist.

Restore MASTER database on a failover cluster

The procedure is slightly different from the restore of a stand-alone instance. The rebuild of the system databases must be performed on the active node, and the SQL Server resource in the corresponding cluster application group must be taken offline before performing the procedure. I’m trying to restore the master database on a Windows 2012 R2 failover cluster, and I’m running SQL Server 2012 SP2 Enterprise Edition.

I started with a rebuild of the system databases and started the SQL Server instance with the -m parameter (single-user mode), as described above.

To restore the master database, I opened a sqlcmd session, but I kept getting the error that only one administrator can connect to the instance. The problem is that as soon as I start the service, the sp_server_diagnostics process kicks in and takes the first connection…
After searching for a while, I figured out that you have to start a clustered instance in single-user mode in a different way: you can limit the admin connection to a specific client application, for example by using -m"SQLCMD" as the startup parameter. More information can be found at http://msdn.microsoft.com/en-us/library/ms188236.aspx. Check the note for clustered installations.

Restore MASTER database alternative

An alternative method to restore the MASTER database could be the following:

  • Start up the instance with the new, empty master database after the rebuild of the system databases
  • Restore the original master database as a user database, using a different name (see the sketch after this list)
  • Shut down the service again
  • Remove the master.mdf and mastlog.ldf files
  • Rename the restored database files back to master.mdf and mastlog.ldf
  • Restart the service
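A minimal sketch of that second step, assuming a master backup at the <BackupPath> placeholder and a hypothetical <DataPath> folder for the data files:

--Restore the master backup under a different name (hypothetical paths)
RESTORE DATABASE [master_restored]
FROM DISK = N'<BackupPath>master.bak'
WITH MOVE 'master' TO N'<DataPath>master_restored.mdf',
     MOVE 'mastlog' TO N'<DataPath>master_restored.ldf',
     RECOVERY
GO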

The method above also works, but the procedures mentioned earlier are the best practice for SQL Server.

Restore MODEL database

The model database can be restored like a normal database. However, if the database is corrupt, you have to start the SQL Server instance with trace flag -T3608. This trace flag prevents SQL Server from automatically starting and recovering any database except the master database.

To activate the trace flag, open the SQL Server Configuration Manager and add it to the startup parameters.

(Re)start your SQL Server Instance. Do not forget to remove this trace flag after the model database has been restored.

Open SQL Server Management Studio and restore the original model database from the backup.
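A hedged example of that restore, again using the <BackupPath> placeholder:

--Restore the model database (placeholder path); start with -T3608 first if model is corrupt
RESTORE DATABASE [model]
FROM DISK = N'<BackupPath>model.bak'
WITH REPLACE
GO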

Restore MSDB database

Before you can restore msdb, you’ll have to stop the SQL Server Agent. You can do this with the SQL Server Configuration Manager, or just right-click the agent in SSMS and click Stop.

The MSDB database can be restored like a normal user database.
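With the Agent stopped, a minimal sketch (placeholder path again) could be:

--Restore the msdb database (placeholder path); SQL Server Agent must be stopped first
RESTORE DATABASE [msdb]
FROM DISK = N'<BackupPath>msdb.bak'
WITH RECOVERY
GO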

Once the restore is finished, you can restart the SQL Server Agent and you should see all your jobs again in SSMS.

Restore Tempdb database

You cannot back up or restore tempdb. This database is re-created every time an instance of SQL Server is started. When the server instance is shut down, any data in tempdb is deleted permanently.

If your tempdb is corrupt, just restart your SQL Server instance. In case your disk is corrupt, you will have to move tempdb. More information on how to move system databases can be found here: http://msdn.microsoft.com/en-us/library/ms345408.aspx
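As a sketch of what moving tempdb looks like, assuming the default logical file names tempdev and templog and a hypothetical <NewPath> target folder:

--Move the tempdb files; the new location is used after the next restart of the instance
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'<NewPath>tempdb.mdf')
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'<NewPath>templog.ldf')
GO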

Restore RESOURCE database

The Resource database is a hidden database. It resides in the mssqlsystemresource.mdf file, which contains only code. Therefore, SQL Server cannot back up the Resource database. It can be restored by a file-level restore in Windows or by running the setup program for SQL Server.

 

HAPPY RESTORE!

Remove Backup files from Azure Storage Part 2

In April this year, I posted a script to delete your backup files from Windows Azure Storage.
This week, I got a question from Mark about how to use this script in a SQL Server Agent job on an Azure VM. I mentioned in my previous blog post that you should download the Azure PowerShell cmdlets before you can use the script, but Mark kept getting the following error when he tried to download the installation file.


In this blog post, I will explain in more detail how you can configure the script correctly in a SQL Server Agent job and how you should install the Azure PowerShell cmdlets.

Start up your VM on Azure and connect to the instance with SSMS.
Before we install anything extra on the VM, let’s configure a job with the removal script that you can find in my previous post. Don’t forget to change the storage account, the access key and the container in the script to match your configuration.
Your job step should look like this. Choose the type PowerShell, not T-SQL.


When you execute the job manually, you will probably get the following error message: The term ‘New-AzureStorageContext’ is not recognized as the name of a cmdlet…


This is because we haven’t installed the Azure PowerShell cmdlets yet. You can download the cmdlets here. Now, when you try to download them on the Azure VM, you will get the following error:


This is because your browser settings don’t allow you to download files. To fix this, open your Internet Explorer settings and select Internet Options.


Select the Security tab and click the Custom level button.

Scroll down to Downloads and select the option Enable under “File download”.


Click OK, close the settings and restart your browser. Now you should be able to download the file to install the Azure PowerShell cmdlets.


Save and run the file.

First of all, you will get this nice welcome screen…

Wait until it is finished and you see the next screen.
Just click Install.

You will see that it installs Microsoft Azure PowerShell. Click “I accept” in the next screen

…and wait until the installation is finished.

Click Finish at the end.

In the final screen you should see that the Azure PowerShell cmdlets are installed.

Ok… that’s it!!!

Go back to the SQL Agent job and execute it again. Now the job should execute successfully!

To be 100% sure that it’s working, go to the Windows Azure Management Portal and check the container to verify that the files have been removed.

Have fun with it!

Faster migration of your SQL 2005 databases

Many DBAs are trying to get rid of the old SQL Server 2005, which is a good thing.
I’ve been working on some consolidation and migration projects, and I’ve found a new and faster way to migrate SQL 2005 databases.

The easiest way to migrate a database is to back up the database, copy it to the new server and restore it. But how can you make this process faster? Check out this blog post by Jes Schultz Borland (blog | Twitter). Compressing the backup file can make the migration go faster: the backup will be faster because SQL Server has to write less to disk, the copy will be faster because the backup files are much smaller, and the restore will be faster because SQL Server has to read less from disk. However, there is one little problem… SQL Server 2005 doesn’t support backup compression. This is only available from SQL Server 2008 Enterprise Edition, or from SQL Server 2008 R2 Standard Edition onwards.
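On a version and edition that does support it, backup compression is just an extra option on the BACKUP statement, for example (the database name and <BackupPath> are placeholders):

--Full backup with compression (placeholder names)
BACKUP DATABASE [MyDatabase]
TO DISK = N'<BackupPath>MyDatabase.bak'
WITH COMPRESSION, INIT
GO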

But now there is a way to compress your SQL 2005 backup files without using any third-party tool. A few months ago, Microsoft introduced a tool called “SQL Server Backup to Azure Tool”. They created it to provide a consistent backup solution. In SQL Server 2014, backup to Azure, encryption and compression are embedded in the engine, but these features are not available in older versions of SQL Server (except for compression, of course). The Backup to Azure Tool lets you use the same functionality in older versions of SQL Server, from SQL 2005 onwards. The good thing about the tool is that you don’t need to use any Azure functionality to use encryption or compression. This means that we can compress our on-premises SQL 2005 backups and make our migration faster!

The tool can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40740 and is pretty straightforward to install. It is supported on SQL Server 2005 or later, on Windows Server 2008 or later for servers and Windows 7 or later for client operating systems.

Once you start the tool, you have to configure some rules.

Click the Add button and start by specifying the path where you are going to save your backup files and the file name pattern:

I don’t want to use the Azure functionality, because I’m using the tool for migration purposes to another on-premises server.
Select “Use local storage”.

Select the option “Enable compression”.

Now you have set up the rule that is needed to compress local backups.

I did a few tests to see how well the Backup to Azure Tool compresses my backups. I used a 130 GB database and ran the tests on my laptop (16 GB RAM and 8 CPUs).
I used SQL Server 2012 for my tests because I wanted to compare the difference between native backup compression and Backup to Azure Tool compression.

Test | Duration | MB/sec | Backup file size | CPU usage
Full backup without compression and without Backup to Azure Tool compression | 2696 sec | 34 MB/sec | 89.3 GB | 4%
Full backup with compression and without Backup to Azure Tool compression | 1502 sec | 61 MB/sec | 16.9 GB | 10%
Full backup without compression and with Backup to Azure Tool compression | 2344 sec | 39 MB/sec | 13.5 GB | 20%
Full backup with compression and with Backup to Azure Tool compression | 1403 sec | 65 MB/sec | 16.3 GB | 15%

I noticed that the size of the backup file is smaller when I use the Backup to Azure Tool than with native backup compression.
On the other hand, the duration is almost the same as a backup without compression, and the CPU usage is about 10% higher than with native backup compression.
But even with these results, it still makes sense to use the tool for faster migrations: you have to copy less data, and the restore will also be much faster because there is less data to read from disk.

Happy migration!

Pieter