It’s hard to convince clients to use SQL 2014

SQL Server 2014 was released on the 1st of April, and it has some really cool features that might help you and your organization and can definitely justify an upgrade. In my job as a SQL consultant, I have a lot of different customers, but none of them is currently working on SQL Server 2014 or planning an upgrade to this new version. I find it really hard to convince my clients to upgrade, or even to pick it for a brand new installation.

Why is it so hard to convince people to start using SQL Server 2014? After questioning my customers, I came up with these 5 reasons:

I don’t trust it

This is probably the most common argument of my customers. They don’t trust the new version because it has just been released and probably has a lot of bugs. Let’s wait until the first service pack has been released, then we know it is stable… Most of my clients tell me that there are just too many risks in installing an RTM version of SQL Server. “What if something happens and we’re not prepared for it?” I don’t think Microsoft will release a version of SQL Server that is not stable or not fully tested. It’s true that more bugs are found at the beginning of a new release, but I wouldn’t call the release unstable. Microsoft is still releasing Service Packs and Cumulative Updates for older versions of SQL Server. Does this mean that this software is unstable? Well… no, because you’re probably using it right now. Bug fixes are always part of software, whether it’s a new or an old version.

Oh, is there a new release?

I’ve noticed that many people don’t know that a new version has been released. They just install the older version they already know. Or they have heard about a new release but are not aware of all the cool new functionality that could benefit their company. And that is where we, the SQL Family, come in. We try to make people aware of all these new features with SQL events, webinars, workshops, blogs,…

My vendor doesn’t support it yet…

Another reason that I hear a lot is that the vendor of a software package (which uses a SQL Server database) does not support the new version yet. I even had to install a brand new SQL Server 2005 last month because the vendor didn’t support any other version… I told my customer to get rid of that software and find a good alternative. The vendor’s software will probably work on SQL Server 2014, but if you use it anyway, you lose full support on the software… Try the Upgrade Advisor and see if there is some exotic configuration that will no longer work. I doubt it… I usually send the report to my customer and the supplier. And I’ve noticed lately that suppliers respond positively and say they will support SQL Server 2014 in a next major release.

It’s too expensive

Upgrading to a new version of SQL Server is a project that requires some investment. Not only the license cost, but also new hardware, re-coding your application, testing… Many customers complain that the Enterprise Edition has become really expensive, but do they really need the Enterprise Edition? From SQL Server 2008 onwards, you can use the DMV sys.dm_db_persisted_sku_features to view all Enterprise-Edition-specific features that are enabled in the current database. If they do need the Enterprise Edition, it might be worth starting a consolidation project instead of an upgrade project to save license costs.
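A quick way to verify this is to run the DMV in each database; if it returns no rows, nothing Enterprise-specific is persisted there. A minimal sketch (MyDatabase is a placeholder name):

USE MyDatabase;  -- placeholder: run this in each database you want to check
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
-- No rows returned means no Enterprise-only features are persisted in this database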

The upgrade can also save you money: new features like page and row compression or the Resource Governor can save resources, and the new T-SQL improvements can save your developers a lot of time.
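To illustrate the compression point, estimating and enabling page compression takes only a couple of statements. A hedged sketch, where dbo.SalesHistory is a hypothetical table name:

-- Estimate the space saving first (dbo.SalesHistory is hypothetical)
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'SalesHistory',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Then enable page compression on the table
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE);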

No time, too much work

I hear this a lot… “Upgrade…? Sorry, but we don’t have time for that… Why should we upgrade? The database is working just fine and we have other, more important things on our agenda.” I don’t think it really has anything to do with “no time”. Some people are just not motivated to do an upgrade. It’s hard to convince people who are not willing to learn and are not open to new ideas. And maybe that’s because of the points mentioned above: they don’t trust the software, and it will cost the company a lot of money.

I don’t know if I’m the only one experiencing this :) But I hope that I can change the perception of my clients… it’s a tough job… believe me…

Visualize your backup schedules

Recently I was asked by a customer to provide the database backup timings for every SQL Server (only the full backups).
Originally, all the backups were taken at 19:00 on a SAN disk that was attached directly to each server. Once a backup was taken, it was pushed to tape during the night. But recently the customer changed strategy: all backups had to go to a CIFS share on a NetApp. The backup location was changed, but not the schedule. As a result, some servers were having problems with their backups. All servers were taking their backups at the same time and transferring them over the network to the same share, so the network load increased a lot :)

So, I had to come up with a solution to get the start time and the duration of each backup job of each server. With that information, I could reschedule the jobs on each server so that the network load was spread during the night.

This blog post by Brad Schulz (Blog) helped me to get the information out of the msdb job tables. I’ve changed the script a bit so that I get the average backup duration per server. I’m using a Central Management Server, so it was easy to collect the information in one go.

SET NOCOUNT ON;

declare @HistoryStartDate datetime,
        @HistoryEndDate datetime;

set @HistoryStartDate = '19000101';
set @HistoryEndDate   = getdate();

with JobHistData as (
    select  j.name,
            -- run_date (yyyymmdd) and run_time (hhmmss) combined into a datetime
            date_executed   = convert(datetime, convert(varchar(8), h.run_date)) + ' ' + stuff(stuff(right(1000000 + h.run_time, 6), 3, 0, ':'), 6, 0, ':'),
            -- run_duration is stored as hhmmss; convert it to seconds
            jobsecsduration = h.run_duration / 10000 * 3600 + h.run_duration % 10000 / 100 * 60 + h.run_duration % 100,
            jobtimeduration = stuff(stuff(right(1000000 + h.run_duration, 6), 3, 0, ':'), 6, 0, ':'),
            jobstarttime    = stuff(stuff(right(1000000 + s.active_start_time, 6), 3, 0, ':'), 6, 0, ':')
    from msdb.dbo.sysjobhistory h
    inner join msdb.dbo.sysjobs j on h.job_id = j.job_id
    inner join msdb.dbo.sysjobschedules js on js.job_id = j.job_id
    inner join msdb.dbo.sysschedules s on s.schedule_id = js.schedule_id
    where j.name = 'DatabaseBackup - USER_DATABASES - FULL'
      and h.step_id = 0       -- Job Outcome
      and h.run_status = 1    -- Succeeded
)
select  Name,
        AvgDuration = convert(int, avg(jobsecsduration)),
        AvgTime     = convert(varchar, dateadd(second, convert(int, avg(jobsecsduration)), 0), 108),
        JobStarttime
from JobHistData
where date_executed >= dateadd(day, datediff(day, '19000101', @HistoryStartDate), '19000101')
  and date_executed <  dateadd(day, 1 + datediff(day, '19000101', @HistoryEndDate), '19000101')
group by Name, jobstarttime;

The query gives me the following result (server names are hidden)

A nice overview, but not really useful for rescheduling the backup jobs. I needed something visual, with timelines, so that I could easily see which backups were running at the same time. I tried several programs, but I ended up with a solution in… (oh yes) Excel! To be honest, I’m not an Excel expert 😉

First, I pasted the result of my query into Excel. Columns C and D should have the Time type.
I added some extra columns:

  • Current Endtime = Column C + D
  • Number Starttime = Column D but the type is Number
  • Number Endtime = Column E but the type is Number
  • Number Elapsed Time = Column G – F. Column should also have the Number type

On the second sheet add an empty Stacked Bar Chart

Right click on the chart and select “Select Data…”

The Chart data range should be the first column Server Name
Edit the Series:

  • Change the Series name to Starttime
  • Series values will be column D Current Starttime

Add a second Series

  • Change the Series name to Duration
  • Series values will be column H Number Elapsed Time

Edit the Category

  • Values will be column A Server Name

You should now have a similar configuration

Now that we have all the data in our chart, let’s fix the layout

Right click on the Horizontal Axis and select Format Axis…

On your right, select the AXIS Options and change the following values

  • Bound Minimum: 0,791666666666667. This is 19:00, the start time of my backups (Excel stores times as a fraction of a day, so 19:00 = 19/24).
  • Bound Maximum: 1. This is 00:00 (midnight).
  • Units Major: 0,010416666666666. This is every 15 minutes (15/1440 of a day).

As a result, the axis should now be a time value from 19:00:00 till 00:00:00 for every 15 minutes.

Right click on the Vertical Axis, go to AXIS Options and select Categories in reverse order. This reverses the order of the server list and puts the horizontal axis on top.

The last thing you need to do is hide the Starttime series; we are only interested in the duration of each backup starting from its start time. In the chart, right click the Starttime series (normally the blue bars), select Format Data Series, go to the Fill option and select No fill.

Now you should get a result similar to this

If you want, you can add some extra design to the chart. I finally came up with the following layout

This really gives me a nice overview of how my backups are scheduled.
Setting up my new backup timings will be really easy now: I just have to change the start times on the first sheet.
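And if you prefer to script the rescheduling instead of clicking through the SQL Server Agent GUI, msdb’s sp_update_schedule can set the new start time. A small sketch, assuming the schedule name is unique (the name below is a placeholder):

-- Move a job schedule to a new start time, e.g. 21:30 (encoded as HHMMSS)
EXEC msdb.dbo.sp_update_schedule
     @name = N'Full backup schedule',   -- placeholder schedule name
     @active_start_time = 213000;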

You can get my Excel example here!

Have fun!

DBA Checklist with Policy Based Management

In my job as a SQL Server Database Consultant, I have my “recurring” customers. This means that every week, I’m at the customer for 1 day to do day-to-day DBA tasks.
Since those customers don’t have a DBA, I have to perform the daily DBA checklist on a weekly basis :)… Initially, I started out executing some T-SQL scripts to verify the SQL Servers. But as more servers were installed, my DBA checklist became really time-consuming.

So, I had to find a way to perform the same checks more efficiently. I started playing with the Policy Based Management feature and created my “DBA Checklist” policies, which I check against all the SQL Servers at once.
It is less time-consuming and it gives you a nice overview of failed checks.

The first step is creating a Central Management Server (CMS) and registering all your servers under that CMS.
I’m not going into detail on this but you can find more information on http://msdn.microsoft.com/en-us/library/bb895144.aspx

I’ve created the following DBA Checklist

  • Verify if there were any failed SQL Agent Jobs during the last week
  • Verify if all the SQL Server Services are running
  • Verify if all databases are in a “Normal” state (not suspect, restoring, offline, …)
  • Verify free disk space
  • Verify free space in the data files
  • Verify if a full backup was taken of the database in the last 24 hours
  • Verify if a transaction log backup was taken of the database in the last hour
  • Verify if there were errors in the error log with a severity > 17 for the last 7 days

This checklist is pretty basic, I know, but it gives me a good idea of whether my SQL Servers are fine or not; the sketch below shows roughly what the full backup check verifies. Feel free to give me any suggestions of checks that I can add to the list.
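The policies themselves express the checks as Policy Based Management conditions, but as a hedged T-SQL sketch, the “full backup in the last 24 hours” check boils down to something like this:

-- Databases without a full backup in the last 24 hours (rough sketch)
SELECT d.name
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type = 'D'                                      -- D = full database backup
      AND b.backup_finish_date > DATEADD(HOUR, -24, GETDATE())
WHERE d.name <> 'tempdb'                                    -- tempdb is never backed up
  AND b.backup_set_id IS NULL;                              -- no recent full backup found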

You can download the policies from my SkyDrive and import them into your Central Management Server.
Please note that the policies will not work against a SQL Server 2000 or lower.

Connect to your CMS – Go to Management – Policy Management – Policies
Right click Policies and select “Import Policy”

 

Select the XML files you just downloaded as the “Files to import” and click OK.

 

If all went fine, you should get a result like the one below

 

Now let’s do a check against all my servers.
Right click on your CMS and select “Evaluate Policies”

As the source, select the CMS server again, because that’s where you have just imported the policies.

Select the policies that you want to verify. You can easily find my policies because I’ve put them in the category “DBA Checklist”

Now, just click the Evaluate button. As a result, all the selected policies will be checked against the servers that are registered under your CMS.
When the checks are finished, you should see this kind of result

Below, in the target details area, you can see which checks failed. Click on the “View” link to see more details and correct them.
In the end, you want to see all your policy checks green 😀 !

If you have suggestions or find any improvements, please let me know so that I can adapt them.

Have fun!

 

How to set up SSRS high availability with Standard Edition

Recently I was asked by a customer to build a high availability solution for their SQL Server reporting environment. The report server database was installed on a SQL Server failover cluster. Alright, let’s install an SSRS scale-out deployment then. However… my customer didn’t want to buy SQL Server 2012 Enterprise Edition, so I had to come up with a solution that provides high availability with Standard Edition.

When you run 2 or more instances of SSRS Standard Edition against one report server database, which is normal in the Enterprise Edition, you will receive a message that the scale-out deployment is not supported.

I found the following blog post http://www.pawlowski.cz/2009/12/sql-server-2005-reporting-services-scale-out-deployment-ssrs-2005-scale-out-deployment-2/ by Pavel Pawlowski (Blog|Twitter), which explains exactly what I needed. The idea is to replicate the report server databases from the primary server to the secondary server. A load balancer provides the failover functionality.

The picture below shows how the setup should look.


In this blog post, I’m going to explain how I installed the solution and which issues I ran into during installation and configuration.

Installing first server

On the first server, I installed the SQL Server 2012 database engine plus SQL Server 2012 Reporting Services. I slipstreamed the installation with SP1 CU2, so I’m using the latest patches of SQL Server. When the installation was finished, I configured Reporting Services in a very standard way, as a stand-alone report server. The report server databases were created on the same server.

 

 

When the installation and configuration are finished, you have to take a backup of the encryption key. We will need it on the second server.

 

Installing second server

The next step is installing the secondary server. The method is exactly the same as for the first server: just install the report server as if it were a stand-alone server. Also make sure that you create the report server database on the server itself. Once the server has been set up, you have to restore the encryption key from the first node to be able to share encrypted data between the 2 servers. To do so:

Open the report server configuration manager

Once you’re connected, select Encryption Keys

And click the Restore button.

Copy the encryption key backup file from node 1 to node 2.
Select the file location and type in the correct password that you have used for the backup.
Press OK to restore.

Then drop the foreign key on the Notifications table, as it will cause an error when setting up the replication later on.
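Since the constraint name can differ between installations, a hedged way to find it is to generate the DROP statement from the catalog views:

-- Run in the ReportServer database: generate the DROP statement(s)
-- for the foreign key(s) on dbo.Notifications, then execute the output
SELECT 'ALTER TABLE dbo.Notifications DROP CONSTRAINT ' + QUOTENAME(name)
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.Notifications');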

Setup merge replication publication

After both report servers have been installed, you have to set up merge replication for the ReportServer database. Pavel mentions in his blog that merge replication is necessary because users are automatically directed to the nodes by the load balancer, so all nodes must be updated automatically whenever there is a change on any node.

However, in my situation, the load balancer always sends the requests to the first server and only points to the second node in case of failure. The reason is that my customer only has the budget for one Standard license, so the second server is considered a hot standby. This setup may lead to some difficulties with Microsoft licensing, but that’s another discussion.

I tried to work out the same configuration with the other publication types:

  • Snapshot publication: not really useful in this situation
  • Transactional publication: I was able to set up the publication and subscription; however, as soon as you try to launch a report on the second server, you get primary key violation errors.
  • Peer-to-peer publication: only available in Enterprise Edition, so not an option

Conclusion: Merge replication is the only possible publication that you can use here.

To setup the publication follow these steps:

Connect to the first SQL Server and go to Replication – Local Publications.
Right-click on Local Publications and select “New Publication”

Use the primary server as its own Distributor

 

Select the ReportServer database. It’s not necessary to publish ReportServerTempDB.

 

Select the Merge Publication Option

 

Since I’m using 2 new installations of SQL Server 2012, I select the option “SQL Server 2008 or later”.

The next step is choosing the articles for the publication. As Pavel mentions in his blog, we need to exclude the Keys table, because each instance stores its own key in that table. Each instance has its own name and creates a new entry in the Keys table, and when there is more than one entry in the table, SSRS automatically detects a scale-out deployment, which is supported only by the Enterprise Edition.
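If you’re curious how SSRS counts the instances, you can peek at the Keys table yourself. A hedged sketch (the column names may vary per version, so verify against your own ReportServer database):

-- One row per registered SSRS instance; more than one machine entry
-- is what triggers the scale-out detection
SELECT MachineName, InstanceName
FROM dbo.Keys
WHERE MachineName IS NOT NULL;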

The Events and Notifications tables are used for monitoring subscriptions and events in SSRS. To support correct subscription processing, we cannot replicate those tables either.

When the scheduled time of a subscription arrives, the SQL Server Agent generates an event by executing the scheduled job. The job inserts a row into the Event table of the ReportServer database. This row serves as an event for the Scheduling and Delivery Processor and is deleted after it has been processed.

 

 

When you click Next, the wizard returns some Article issues

 

A uniqueidentifier column will be added to a few tables because they don’t contain one and merge replication requires it. If you have a look at those tables, they just contain some upgrade history, so I have included them in my publication.

A strange issue I noticed: even if your table already has a uniqueidentifier column, an extra column is still added to the original table… (merge replication needs a column with the ROWGUIDCOL property, and an existing plain uniqueidentifier column apparently doesn’t qualify).

After I finished my tests, it looks like Reporting Services doesn’t have any problems with that extra column. I don’t know how a future update of Reporting Services will behave, though, so you could possibly run into upgrade issues.
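If you want to see up front which tables merge replication will extend, this hedged sketch lists the tables in the ReportServer database that don’t have a ROWGUIDCOL column yet:

-- Tables without a ROWGUIDCOL column; merge replication will add
-- a rowguid column to each of these when the articles are published
SELECT t.name
FROM sys.tables t
WHERE NOT EXISTS (SELECT 1
                  FROM sys.columns c
                  WHERE c.object_id = t.object_id
                    AND c.is_rowguidcol = 1)
ORDER BY t.name;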

I don’t need to filter the data, so I just click Next.

 

Create the snapshot immediately as we will add a subscription later on.

 

I configured the Snapshot Agent to run under the SQL Server Agent service account.

 

At the end, we do indeed want to create the publication.

 

Complete the wizard by clicking Finish

 

This should be your result after the wizard has finished.

Setup merge replication subscription

Now we are ready to subscribe to the publication of the primary node database.
To do so, connect to the second node. Go to replication – Local Subscriptions.
Right click and select “New Subscriptions”

 

Select the first node, which contains your publication.

 

I have chosen to run the Merge Agent on the subscriber and not on the Distributor

 

Choose the database ReportServer as the subscription database. This is the empty database that was created during the installation.

 

Choose the impersonate process account option for the Subscriber

 

As the replication will be used for a high availability solution, I want the agent to run continuously. Changes on either side must be replicated to the other server as quickly as possible.

 

Choose to initialize the subscription immediately.

 

Select “Client” as the Subscription Type. Pavel mentions in his blog to choose Server, but my situation is a little different: I only have one secondary server and all the report requests are sent to the primary node, so in my case it’s better to choose Client.

 

We want to create the subscription at the end of the wizard

Click finish to end the wizard and to configure the subscription

 

Make sure that the SQL Server Agent service account has been created as a login.
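If it is missing, creating it is a one-liner (DOMAIN\SQLAgentSvc is a placeholder for your actual service account):

-- Placeholder account name; replace it with your SQL Server Agent service account
USE [master];
CREATE LOGIN [DOMAIN\SQLAgentSvc] FROM WINDOWS;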

When the replication is working, you should get the following result in the Replication Monitor

 

I had a few issues with the creation of my snapshot, but that was caused by some security issues.

DDL Changes

While I was testing, I noticed that deleting a subscription gave the following error:

Column name or number of supplied values does not match table definition.

I came to the conclusion that the stored procedure AddSubscriptionToBeingDeleted was causing the error: it inserts into the table SubscriptionsBeingDeleted, but the number of columns no longer matches because of the rowguid column that merge replication added. Below you can find the modified stored procedure that solves this issue. You should execute it on both servers.

USE [ReportServer]
GO
ALTER PROCEDURE [dbo].[AddSubscriptionToBeingDeleted]
    @SubscriptionID uniqueidentifier
AS
-- Delete the subscription if it is already in this table
-- Delete orphaned subscriptions, based on the age criteria: > 10 minutes
delete from [SubscriptionsBeingDeleted]
where (SubscriptionID = @SubscriptionID) or (DATEDIFF(minute, [CreationDate], GetUtcDate()) > 10)

-- Add the subscription being deleted into the table, now supplying a value
-- for the rowguid column that merge replication added
insert into [SubscriptionsBeingDeleted] VALUES (@SubscriptionID, GetUtcDate(), newid())
GO

Now you have a fully operational, highly available reporting solution with SQL Server Standard Edition.

The client PCs connect to a network load balancer, which always sends the report requests to the primary report server and only points to the secondary server in case the primary fails. The secondary server has been set up exactly the same way as the primary node. Between the 2 nodes, I have set up SQL Server merge replication, which replicates all changes from primary to secondary and the other way around. With this configuration, we provide a high availability solution.

Have fun with it!

Could not load file or assembly ‘Microsoft.VisualStudio.Tools.Applications.Core’

I was migrating SSIS packages from a SQL Server 2008 R2 server to a new SQL Server 2012 server.
When I loaded a package in SQL Server Data Tools, I got the following error:

I only got this error when trying to upgrade a package with a Script Component in it.
When I tried to edit the script, I also got the following error message:

Strangely enough, when I opened the same package on my computer, it worked fine.

After searching for a while, I found a solution to this issue.
The key is in the message “Cannot show Visual Studio 2010 Tools for Applications editor”.

When I looked at “Programs and Features” on my server, I saw that the following components were installed:

When I compared it to “Programs and Features” on my own computer, I saw that an extra component was installed there: Microsoft Visual Studio Tools for Applications x86 Runtime 3.0.

After some investigation, I found out that the problem is caused by the order in which you install SQL Server Integration Services 2012 and SQL Server Data Tools.
So I played around and tested 3 different scenarios:

  1. First install Integration Services, then run the setup again to install SQL Server Data Tools

Actually, this was my original situation. I had only installed Integration Services, but the customer also wanted SQL Server Data Tools, so I added it afterwards.
When Integration Services was installed, I saw that only one component was present.

Now, let’s install SQL Server Data Tools. According to the installation wizard, Microsoft Visual Studio Tools for Applications 3.0 still needs to be installed (see picture below).

However, after the installation finished, Microsoft Visual Studio Tools for Applications Design-Time 3.0 had been added, but Microsoft Visual Studio Tools for Applications x86 Runtime 3.0 was still missing…

  2. First install SQL Server Data Tools, then run the setup again to install Integration Services

I started again from a clean situation and first installed SQL Server Data Tools. The Microsoft Visual Studio Tools for Applications Design-Time 3.0 and Microsoft Visual Studio Tools for Applications x86 Runtime 3.0 components were installed.
As you might notice, the x64 Runtime is now missing…

No problem, let’s install Integration Services now. I expected that the Microsoft Visual Studio Tools for Applications x64 Runtime 3.0 would be installed, since it was installed in my first test.
Well, surprisingly enough, it didn’t install it… The SQL Server installation wizard saw that Microsoft Visual Studio Tools for Applications 3.0 was already installed (see picture below).

And when I verified “Programs and Features”, the Microsoft Visual Studio Tools for Applications x64 Runtime 3.0 was still missing.

  3. Install SQL Server Data Tools and Integration Services at the same time

I started again from a clean situation and installed SQL Server Data Tools and Integration Services at the same time.
The SQL Server installation wizard detected that it still had to install Microsoft Visual Studio Tools for Applications 3.0, which is a good thing.

Finally, both x64 and x86 Tools have been installed!

Now, let’s see if my package upgrades without any error.

YES, IT DID!!!

 

If you would like to have this issue fixed, please vote on the following link

https://connect.microsoft.com/SQLServer/feedback/details/776248/could-not-load-file-or-assembly-microsoft-visualstudio-tools-applications-core

Move Cluster Group is back!

I recently installed a Windows Server 8 beta cluster on my laptop. While I was exploring some of the new features, I noticed that Microsoft has brought back the possibility to move the “cluster group” in the Failover Cluster Manager.

Since Windows Server 2008, it was no longer possible to move the “cluster group” with the Failover Cluster Manager GUI. If you wanted to move it, you had to use the cluster.exe command:

cluster group “Cluster Group” /move

When I checked the Failover Cluster Manager I found the following option “Move Core Cluster Resources” on the “Actions” tab of the cluster itself.


You now even have the possibility to choose which node to fail over to, in case you have more than 2 nodes, which is also a nice improvement.

Looks like Microsoft missed this feature after all, and I think they will make a lot of people happy with this!