Piecemeal restore of a partitioned table

A few weeks ago I presented my Advanced Backup and Restore session at SQL Rally in Denmark.
One of the topics I covered was the piecemeal restore functionality, which allows you to restore your database in "pieces". The files or filegroups that have been restored are accessible while the others are still restoring. Jes Borland (blog | twitter) has written a good blog post on this topic.
During the session, I was asked whether it was possible to restore filegroups that were used by a partitioned table, which would make the table partially available.
My answer was that you had to restore all the files before the table becomes available, but it turns out that answer was not correct.

With this blog post I want to correct my answer.

Set up the test scenario

For testing purposes, I created a database, MyDatabase, with 5 filegroups: PRIMARY, Partition1, Partition2, Partition3 and Partition4.

Once the database was created I added my partition function and partition scheme.
As you can see, the partition function has 3 boundaries (1, 100, 1000), and the partition scheme specifies the filegroup that holds each of the four partitions.

CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (1, 100, 1000);
GO
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (Partition1, Partition2, Partition3, Partition4);

The next step was creating a partitioned table based on the partition scheme.

--Create partitioned table
CREATE TABLE PartitionedTable
	(PartitionColumnID int NOT NULL,
	Name varchar(100) NOT NULL,
CONSTRAINT [PK_PartitionColumnID] PRIMARY KEY CLUSTERED
(
	PartitionColumnID ASC
)
) ON myRangePS1 (PartitionColumnID);

And fill the table with 1000 records

DECLARE @i int = 1;
WHILE @i <= 1000
BEGIN
	INSERT INTO PartitionedTable (PartitionColumnID, Name)
	VALUES (@i, 'Name' + CAST(@i AS varchar(10)));
	SET @i += 1;
END

The query below gives me the number of records per partition:

SELECT $PARTITION.myRangePF1(PartitionColumnID) AS Partition, 
COUNT(*) AS [COUNT] FROM PartitionedTable
GROUP BY $PARTITION.myRangePF1(PartitionColumnID)
ORDER BY Partition ;
GO
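Given the RANGE LEFT boundaries (1, 100, 1000) and the values 1 through 1000 that were just inserted, you can work out the expected distribution up front:

Partition 1: 1 row (values <= 1)
Partition 2: 99 rows (values 2 to 100)
Partition 3: 900 rows (values 101 to 1000)
Partition 4: 0 rows (values above 1000)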

Restore the partitioned table

Before I can start with a piecemeal restore, I of course need a backup of my database. Don't forget to change the <BackupPath> in the script.

--Take a new FULL BACKUP
BACKUP DATABASE [MyDatabase]
TO DISK = N'<BackupPath>MyDatabase.bak' WITH COMPRESSION,INIT
GO
BACKUP LOG [MyDatabase] TO DISK = N'<BackupPath>MyDatabase_Log.trn' WITH FORMAT,INIT,COMPRESSION
GO

OK, let's drop the database and begin with the piecemeal restore.

USE [master]
GO
ALTER DATABASE [MyDatabase] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE [MyDatabase]
GO

First of all, I’m going to restore the PRIMARY filegroup so that my database is already available and “ONLINE” again.

RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'PRIMARY'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH PARTIAL, NORECOVERY 
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of the database files, you can see that the PRIMARY file is already ONLINE:

SELECT [name], [state_desc] 
FROM [MyDatabase].sys.database_files;
GO

Secondly, I restore the Partition1 filegroup. Don’t forget to always apply the transaction log backup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition1'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

When I check the status of my filegroups again, the Partition1 filegroup is now ONLINE.

I know that 1 record is stored in this filegroup (see "the number of records per partition" above), so let's see if I can query the table.

use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable
GO

Well, I get an error that the second partition cannot be accessed, but if you look at the Results, the query returned 1 record!

Next, I restored the Partition2 filegroup.

USE master;
GO
RESTORE DATABASE [MyDatabase] 
FILEGROUP = 'Partition2'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK='<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO

And I executed the same query. Again I got an error, this time that the Partition3 filegroup was not accessible, but the query returned 100 rows. This is correct, because I know that there were 100 records in the first 2 partitions of the table.

Of course, if you use the correct WHERE clause and only query the ONLINE filegroups, you will not get any errors at all.

--Try to select the tables
use [MyDatabase]
GO
select PartitionColumnID,Name from PartitionedTable where PartitionColumnID<=100
GO

Finally, I also restored Partition3 and Partition4, and my table was completely available again.
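That last step follows the same pattern; both remaining filegroups can even be restored in a single statement (a sketch based on the earlier restores):

USE master;
GO
RESTORE DATABASE [MyDatabase]
FILEGROUP = 'Partition3', FILEGROUP = 'Partition4'
FROM DISK = N'<BackupPath>MyDatabase.bak'
WITH NORECOVERY
GO
RESTORE LOG [MyDatabase] FROM DISK = N'<BackupPath>MyDatabase_Log.trn' WITH RECOVERY
GO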

Conclusion

My answer during my session was not correct. It is possible to restore a partitioned table in pieces and make part of the table available while the other filegroups are still in a restoring state.
This can really help to improve your RTO. Instead of restoring the complete table in one go, you can restore only the partitions that you really need to get your application back up and running.
The less important partitions, for example those that only contain historical data, can be restored at a later stage.

Well, as always, you’re never too old to learn something!

Time for a new challenge!

I'm excited to announce that on the 1st of April (not a joke :)), I will be joining Microsoft as a Premier Field Engineer for SQL Server. After 4.5 amazing years working for Kohera, it was time for a new challenge.
When I started my job as a consultant I knew only a little bit about SQL Server. I worked at a bank in Brussels for 1.5 years, where I learned a lot about the daily DBA tasks.
After this "internal" job I really became a consultant, working on different shorter projects like consolidations and migrations, SQL audits, performance tuning, teaching customers and many more…
I would like to give special thanks to Frederik Vandeputte and Koen Jacobs (both managing partners of Kohera) for the chances and opportunities they gave me. It helped me a lot in improving my SQL skills. Thanks guys for the support and for believing in me!
Another thank you goes to my colleagues. I didn't see them often, but when we met, it was always fun to hear their SQL stories.

I'm really looking forward to starting at Microsoft! I've waited a long time for this, believe me :).
A PFE is a highly skilled engineer with deep technical expertise in a given technology who delivers Proactive (workshops, health checks, risk assessments) and Reactive (on-site support, troubleshooting) engagements.

More information on the Premier Field Engineer role can be found here, or watch the Microsoft Services – Premier Field Engineer Role video.

I still hope to do presentations and events for the SQL community, which I really enjoy doing!

Well… I’ll see you around Europe!

Pieter

 

SQL Server 2012 SP2 CU4 breaks Data Collection

Last week, one of my clients upgraded their SQL Servers to SQL Server 2012 SP2 CU4.
After the upgrade, the internal DBA notified me that the data collection was not showing any Query Statistics anymore in the Management Data Warehouse report.
I verified the logs of the Query Statistics and saw that the Query Statistics Collection ran for a few minutes and then failed with the following errors:

The external columns for ODS – Get snapshot of dm_exec_requests are out of synchronization with the data source columns. The external column “command” needs to be updated.

Truncation may occur due to retrieving data from database column “command” with a length of 32 to data flow column “command” with a length of 16.

The collection never completed correctly, so the upload to the MDW never happened. That's why the report was empty.
But why did the collection package start to fail? Was the DMV dm_exec_requests updated so that the command column was 32 characters instead of 16? I double checked with Books Online and with an older version of SQL Server, but the command field has always had a length of 32.

Gianluca Sartori (blog | twitter) suggested on Twitter (#sqlhelp) to modify the SSIS packages to get this fixed. I downloaded the package from the server, modified the command field length to 32 characters and uploaded it again to the server. The QueryActivityCollect package was now running OK, but the QueryActivityUpload package then started to fail with the same error on 3 different components (I can't remember which ones). I modified them too and finally I got everything working. … Well… not… Still no data in my report. I noticed that the upload SSIS package was no longer loading the queries into the [snapshot].[query_stats] table…

I was getting frustrated and stopped modifying the packages. There must be a better way to fix this! So I came up with the following workaround.

  • I uninstalled CU4. Yes, you can uninstall SP's and CU's. Just go to Programs and Features in the Control Panel, select the option "View installed updates", right click on the CU and select Uninstall.

     

  • Once the CU was uninstalled, my data collection started to work again. I downloaded the packages from the server again and compared them with the CU4 package versions.

    And guess what: the packages have changed!

    For example, when I compared the component ODS – Get snapshot of dm_exec_requests

    I get the following in the CU4 version

    While in the CU3 version you get the following:

    So there have definitely been "some" modifications. I verified the upload package as well, and it has the same issue.

     

  • I reinstalled CU4
  • Stopped the Query Statistics collection
  • Uploaded the CU3 SSIS packages to the server with dtutil, removed all the cache files from the cache folder and restarted the Query Statistics collection
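Uploading a package with dtutil is a one-liner from a command prompt. A sketch (the server name and file path are placeholders, and the msdb folder name may differ on your instance, so check where the collector packages live on your server):

dtutil /FILE "C:\Temp\QueryActivityCollect.dtsx" /DESTSERVER "MyServer" /COPY SQL;"Data Collector\QueryActivityCollect"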

Finally, my data collection started to work again…
I've opened a Connect item for this. Please vote and let's hope Microsoft will fix this in the next CU :)

Happy collecting!

Pieter

Speaking at SQL Saturday Exeter #372

Two times in a row! Yes, I've been selected again to speak at a very fine SQL Saturday in Exeter!
I had a great time last year and I'm really looking forward to going to the UK again. I'm going by car, so if somebody needs a lift, just let me know.
The event will be held on the 24th and 25th of April 2015 at the Jurys Inn Hotel Exeter, like last year. Really nice hotel, believe me.

This year I'm going to talk about my DBA morning checklist. In my job as a SQL consultant, I have my weekly recurring customers. They don't have a DBA, so I have to perform the daily DBA checklist on a weekly basis ;) Initially, I started out executing some T-SQL scripts to verify the SQL Servers, but as more servers were installed, my DBA checklist became really time-consuming and I didn't even have time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy Based Management to evaluate your servers and how I configured my setup. By the end of this session, I guarantee you will have more time for your morning coffee!

I have the first timeslot, from 08:00 AM till 08:50 AM, so coffee will be necessary ;)
There are of course a lot of other interesting sessions that day.
On Friday, there will be full-day training sessions by William Durkin, David Peter Hansen, Steph Locke, Bob Duffy, Tobiasz Koprowski, John Martin, André Kamman and Richard Douglas. More information and registration can be found here.

And looks like the pirates are coming this year  :D!

Hope to see you all there in April!

Pieter

Speaking at PASS SQLRally Nordic 2015

PASS SQLRally Nordic is back – bigger than ever! And it looks like they're right: if you look at the agenda, it will definitely be a top event!
There are a lot of international top speakers, like Brent Ozar, Grant Fritchey, Itzik Ben-Gan and many more, but also other top industry speakers from across the Nordic region and the world, for 3 days of SQL Server and BI training.
The event will be held on March 2-4, 2015, at the Tivoli Congress Center in Copenhagen, Denmark. I've never been to Copenhagen before, so I'm really looking forward to visiting the city and catching a glimpse of the Little Mermaid.

I’m really happy and proud that I was selected as one of the speakers!
I'm going to talk about Advanced Backup and Restore. Most DBAs know the standard options for implementing a basic backup and restore plan, but there are also lesser known options that can help you with a more advanced recovery plan and that will increase the speed of your backups and restores. My session is bulk loaded with demos and will give you a good idea of which infrequently used features can be helpful in your environment, like partial backup, backup encryption, piecemeal restore… You will also learn how you can improve the performance of your backups! My timeslot is on the third day, Wednesday, March 4, from 12:45 PM till 1:45 PM. The full agenda of the SQL Rally Nordic can be found here.

Well, I hope to see you all in Copenhagen in March!

Pieter

Restore System Databases

Did you ever have to restore your system databases? It's not that difficult when the databases are still available (or at least some version of them). However, what do you need to do when all system databases are gone?
For example, when your system database LUN is missing :). Books Online doesn't really explain this in detail, so I've written this blog post to make it clearer.

Before you read on, please make sure that you have a full backup of all your system databases.

Restore MASTER database

SQL Server requires the master database before an instance can run at all. Some version of a master database must exist; without it, the service will not start and it will be impossible to restore the backup.
So, the first step is to rebuild the system databases.

To rebuild the system databases, you can follow the steps described on http://msdn.microsoft.com/en-us/library/dd207003.aspx:

Insert the SQL Server installation media into the disk drive or mount the iso-file.

From a command prompt window (run as administrator), enter the following command.

Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=<InstanceName> /SQLSYSADMINACCOUNTS=<accounts>

<InstanceName> is the name of the instance of SQL Server. For the default instance, enter MSSQLSERVER.
<accounts> specifies the Windows groups or individual accounts to add to the sysadmin fixed server role.

When the rebuild is finished, you will find new (empty) data and log files in the data folder of the instance.

Once you have the "dummy" master.mdf and .ldf files in place, you need to start the instance in single-user mode. This is done by adding the –m startup parameter; open the SQL Server Configuration Manager to add it (see screenshot below).

In the SQL Server Configuration Manager, start the SQL Server service. Once it is started we finally can restore the master database. The example starts sqlcmd and executes a RESTORE DATABASE statement that restores a full database backup of master from a disk device.
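A sketch of such a restore, run from a command prompt (the server name and backup path are placeholders):

sqlcmd -S MyServer -E
RESTORE DATABASE master FROM DISK = N'C:\Backup\master.bak' WITH REPLACE;
GO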

The REPLACE option will replace the current master database with the one in the backup file. Also notice that after the restore, the instance shuts down. Remove the –m option from the startup parameters again before you restart SQL Server. You will see that all your user databases are back online and that all your server logins exist.

Restore MASTER database on a failover cluster

The procedure is slightly different from the restore of a stand-alone instance. The rebuild of the system databases must be performed on the active node, and the SQL Server resource in the corresponding cluster application group must be taken offline before performing the procedure. I'm trying to restore the master database on a Windows 2012 R2 failover cluster running SQL Server 2012 SP2 Enterprise Edition.

I started with a rebuild of the system databases and started up the SQL Server instance with –m parameter (single user mode), as described above.

To restore the master database, I opened a sqlcmd session, but I kept getting the error that only one administrator can connect to the instance. The problem is that as soon as I start the service, the sp_server_diagnostics process kicks in and takes the first connection…
After searching for a while, I figured out that you have to start a clustered instance in single-user mode in a different way. More information can be found on http://msdn.microsoft.com/en-us/library/ms188236.aspx; check the note for clustered installations.
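In short, the trick is to limit that single admin connection to a specific client application, so that sp_server_diagnostics cannot grab it. A sketch for a default instance (run on the active node, with the SQL Server cluster resource offline; the service name is a placeholder):

net start MSSQLSERVER /m"SQLCMD"

With this parameter, only a connection coming from sqlcmd is accepted, and the restore of master can proceed as described above.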

Restore MASTER database alternative

Another method to restore the MASTER database is the following:

  • Start up the instance with the new empty master database after the rebuild of the system databases
  • Restore the original master database as a user database (use a different name)
  • Shut down the service again
  • Remove the master.mdf and .ldf files
  • Rename the restored database's files back to master.mdf and mastlog.ldf
  • Restart the service

This method also works, but the procedures described above are the recommended practice for SQL Server.

Restore MODEL database

The model database can be restored like a normal database. However, if the database is corrupt, you have to start the SQL Server instance with trace flag –T3608, which prevents SQL Server from automatically starting and recovering any database except master.

To activate the trace flag, open the SQL Server Configuration manager and add it to the startup parameters.

(Re)start your SQL Server Instance. Do not forget to remove this trace flag after the model database has been restored.

Open SQL Server Management Studio and restore the original model database from the backup.
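The restore itself is just a plain RESTORE statement; a sketch (the backup path is a placeholder):

RESTORE DATABASE model FROM DISK = N'<BackupPath>model.bak' WITH REPLACE;
GO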

Restore MSDB database

Before you can restore the MSDB database, you'll have to stop the SQL Server Agent. You can do this with the SQL Server Configuration Manager, or just right click on the Agent in SSMS and click Stop.

The MSDB database can be restored like a normal user database.

Once the restore is finished, you can restart the SQL Server Agent and you should see all your jobs again in the SSMS.

Restore Tempdb database

You cannot back up or restore tempdb. This database is re-created every time an instance of SQL Server is started. When the server instance is shut down, any data in tempdb is deleted permanently.

If your tempdb is corrupt, just restart your SQL Server instance. If your disk is corrupt, you will have to move tempdb. More information on how to move system databases can be found here: http://msdn.microsoft.com/en-us/library/ms345408.aspx

Restore RESOURCE database

The Resource database is a hidden database that resides in the mssqlsystemresource.mdf file, which contains only code. Therefore, SQL Server cannot back up the Resource database. It can be restored by a file-level restore in Windows or by running the setup program for SQL Server.

 

HAPPY RESTORE!

Maximum Failover Limit

With this blog post, I will try to save a few hours of your life!

I recently set up a simple 2-node failover cluster that will be used for an AlwaysOn Availability Group. After the installation, I suggested that the customer perform failover testing, not only to see the behavior of the Windows Failover Cluster, but also to see how the application(s) would respond to a failover. One of the tests was disabling the production LAN.
When I disabled the NIC on the node that was hosting the cluster group, it did a nice failover as expected.

I activated the NIC again and did the same test on the other node (which was now hosting the cluster group). To my surprise, there was no failover at all; the cluster name and the IP address just went offline.

This is not what I expected.
I checked the cluster events and saw the following error “Cluster role ‘Cluster Group’ has exceeded its failover threshold”

That's right…, I had forgotten all about that. The maximum failover limit is set to 1 failover per 6 hours by default.
I knew that you have to change this for testing. This is well explained in this blog post: http://blogs.msdn.com/b/arvindsh/archive/2012/03/09/failover-cluster-group-maximum-failures-limit.aspx
That blog uses Windows Server 2008, but my installation is running Windows Server 2012 R2. It really took me a long time to figure out how to change the maximum failover limit of the cluster group in Windows Server 2012…

You have to right click on the cluster name and select Properties. Now, if you look closely, there is a "link" in the first window called "Manage Core Cluster Resource Group".
I didn't notice that link at first, because most of the links just open the help files…

Go to the Failover tab, and there you will find "Maximum failures in the specified period". By default it is set to 1 in a period of 6 hours. This is not enough during failover cluster testing and will give you the failed/offline issue that I had.
Change it to a higher value, like 100 for example.
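If you prefer PowerShell over the GUI, the same property can be changed with the FailoverClusters module; a sketch (run in an elevated PowerShell on one of the nodes):

Import-Module FailoverClusters
# Raise the failover threshold of the core cluster group for testing
(Get-ClusterGroup -Name "Cluster Group").FailoverThreshold = 100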

Don't forget to set it back to the default value after you finish your tests.

I did a few more tests and it was working fine now!

Well, I hope that I have saved a few hours of your life with this ;)

Limit RESTORE to a single database

A client came to me with the following question: "Is it possible to limit a login to restoring only one specific database?"
Apparently, the application that he was trying to upgrade took a backup of the database before the upgrade. In case of failure, the upgrade procedure would automatically restore the backup.
A good idea to restrict that user, but I didn't know by heart whether that was possible…

Let’s figure it out!

I created a test login, "RestoreUser", that isn't a member of any server role except public. The login is linked to a database user that is a member of the db_owner role of my Sales database.
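For reference, a sketch of that setup (the database name SalesDB and the password are placeholders):

USE [master];
GO
CREATE LOGIN RestoreUser WITH PASSWORD = N'<StrongPassword>';
GO
USE [SalesDB];
GO
CREATE USER RestoreUser FOR LOGIN RestoreUser;
ALTER ROLE db_owner ADD MEMBER RestoreUser;
GO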

Looks like I’m not able to perform a restore of the database with the current user rights.

dbcreator

If we check the server roles, the dbcreator role might be an option. However, if we look at the description…

Members of the dbcreator fixed server role can create, alter, drop, and restore any database.

…there might be an issue, as we don't want the user to create or restore other databases.
My test results confirmed this, so the dbcreator role is not an option…

Custom Server role

The database was running on SQL Server 2012, so maybe I can create a new server role to restrict the restore and add my login to it.
The only permission that looked interesting was CREATE and ALTER DATABASE. No RESTORE DATABASE permission could be selected, and the other permissions were not relevant.


When I tried the restore, it didn't work either. And even if it had worked, the description mentions "any" database, so it's the same issue as with the dbcreator role…

DDL Trigger

Maybe I can capture the RESTORE command in a DDL trigger? In the trigger, I could check the user and the database that is being restored.
I checked the DDL events that are available, and the RESTORE command is not a DDL event…

A DDL trigger…? Not an option…

Database Owner

If you check the RESTORE command on Books Online and look for the security part, you will see the following:

If the database exists, RESTORE permissions default to members of the sysadmin and dbcreator fixed server roles and the owner (dbo) of the database

We already know that sysadmin and dbcreator are not an option, but let's try the "owner of the database" option.

To change the owner of the database, you can execute the following command. Of course, change the database name and the user to your own.

ALTER AUTHORIZATION ON DATABASE::SalesDB TO RestoreUser;

Well… the restore of my database succeeded!

And when I tried to restore another database, it didn't work.
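A quick way to test both cases is to run the restore from sqlcmd under the RestoreUser login (a sketch; the server name, password and backup path are placeholders):

sqlcmd -S MyServer -U RestoreUser -P <Password> -Q "RESTORE DATABASE SalesDB FROM DISK = N'<BackupPath>SalesDB.bak' WITH REPLACE"

Running the same command against any other database name should fail with a permission error.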

Mission accomplished!

Is it wrong to change the Database Owner?

The database owner gets full privileges on the database itself. So, if you change the database owner, make sure that the former database owner can still access the database.
In this post by Brent Ozar, there is a check that the database owner should be sa. But in fact, it's not wrong to have a database owner other than sa.
It's even recommended to create a dedicated account with zero privileges as the database owner. Check this post by Andreas Wolter on SQL Server Database Ownership.

Now, to come back to my situation: it's not wrong to have my RestoreUser as the database owner. The user already had full privileges because it was a member of the db_owner role.

Conclusion

Changing the database owner will allow the user to restore just that single database. Other restores and the creation of new databases will not be possible.
There are a few restrictions to keep in mind. The database must already exist; if it doesn't, you need CREATE DATABASE permission to restore it, and as a consequence the user would be able to create any other database as well.
This option will also not work if you can't allow database ownership for that user. If that's not possible, you're out of options.

I don't know if there are other ways to restrict a user to restoring 1 single database. If you do know one, just drop me a comment ;)

Happy RESTORE!

Remove Backup files from Azure Storage Part 2

In April this year, I posted a script to delete your backup files from Windows Azure Storage.
This week, I got a question from Mark: how can you use this script in a SQL Agent job on an Azure VM? I mentioned in my previous blog post that you should download the Azure PowerShell cmdlets before you can use the script, but Mark kept getting the following error when he tried to download the installation file.


In this blog post I will explain in more detail how you can configure the script correctly in a SQL Server Agent job and how you should install the Azure PowerShell cmdlets.

Start up your VM on Azure and connect to the instance with SSMS.
Before we install anything extra on the VM, let's configure a job with the removal script that you can find in my previous post. Don't forget to change the storage account, the access key and the container in the script to match your configuration.
Your job step should look like this. Choose Type = PowerShell, not T-SQL.
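For reference, the job step body is a PowerShell script along the lines of this sketch of the script from my previous post (the storage account, access key, container and the 30-day retention period are placeholders):

# Connect to the storage account and remove backup files older than 30 days
$context = New-AzureStorageContext -StorageAccountName "<StorageAccount>" -StorageAccountKey "<AccessKey>"
Get-AzureStorageBlob -Container "<Container>" -Context $context |
    Where-Object { $_.LastModified -lt (Get-Date).AddDays(-30) } |
    Remove-AzureStorageBlob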


When you execute the job manually, you will probably get the following error message The term ‘New-AzureStorageContext’ is not recognized as the name of a cmdlet…


This is because we haven't installed the Azure PowerShell cmdlets yet. You can download the cmdlets here. Now, when you try to download them on the Azure VM, you will get the following error:


This is because your browser settings don't allow you to download files. To fix this, open your Internet Explorer settings and select Internet Options.


Select the Security tab and click the Custom level button.

Scroll down to Downloads and select the option Enable under "File download".


Click OK, close the settings and restart your browser. Now you should be able to download the file to install the Azure PowerShell cmdlets.


Save it and Run the file

First of all you will get this nice welcome screen…

Wait until it is finished and you see the next screen.
Just click Install

You will see that it installs Microsoft Azure PowerShell. Click "I accept" in the next screen.

…and wait till the installation is finished

Click finish at the end

In the final screen you should see that the Azure PowerShell cmdlets are installed.

Ok… that’s it!!!

Go back to the SQL Agent job and execute it again. This time the job should execute successfully!

To be 100% sure that it's working, go to the Management Portal of Windows Azure and check the container to verify that the files have been removed.

Have fun with it!

Cloud Witness in Windows Server v.Next

Last week, I attended the session "Did you vote today? A DBA's guide to cluster quorum" by Allan Hirt at PASS Summit.
Interesting stuff, and I learned that my quorum configurations are already done correctly, according to what Allan explained.

Another interesting part was that Allan announced a new type of quorum witness in Windows Server v.Next.

Cloud Witness!

Instead of using a File Share Witness or Disk Witness, you can now also choose a Cloud Witness.
Basically, it creates a file on your Azure storage account which counts as a quorum vote.
This cloud witness can be really helpful for multi-site clusters.
With multi-site clusters, there is always the question of which datacenter should host the file share witness.
In fact, to configure it correctly, you should put your file share witness in a third datacenter. But that's really too expensive for most companies, just for a file share witness. The cloud witness provides a good alternative for this problem.

The only "issue" I see with this cloud witness is that your cluster nodes must have a connection to the internet. And honestly, I haven't seen many SQL Servers or cluster nodes that are able to connect to the internet.
But with the appropriate firewall settings, you should be OK.

I’ve already installed a 2 node Windows vNext Failover Cluster and tried it out.
It’s actually really easy to configure.

Start with opening the Failover Cluster Manager and connect to your cluster.
Right click on the cluster name → More Actions → Configure Cluster Quorum Settings…

In the "Select Quorum Configuration Option" window, select "Select the quorum witness".

In the "Select Quorum Witness" window, select "Configure a cloud witness".

To configure the cloud witness, you need to specify your storage account and your Azure storage account key.

This information can be found in the Azure Portal. Just go to the storage option; at the bottom of your screen you will see a button called Manage Access Keys.

Click on that button, copy one of the 2 keys and paste it into the Azure storage account key field of the cloud witness configuration.

Your configuration should look similar to the screenshot below.

Finally, complete the wizard and if all went well you have now configured your cloud witness.
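For completeness: the same configuration can also be scripted with the FailoverClusters PowerShell module; a sketch (the account name and key are placeholders, and the parameters may still change before v.Next ships):

Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccount>" -AccessKey "<AccessKey>"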

When you look at your storage account in the Azure Portal, you will notice that a new container, “msft-cloud-witness”, is created with 1 blob file inside.

Pretty cool if you ask me :D !

Pieter