Monitoring the progress of CreateIndex

One of the frustrating aspects of SQL Server is the lack of progress indicators. One command that falls into this pit is CREATE INDEX. However, the sys.dm_exec_query_profiles DMV offers a solution;

Step 1 – Before you run the following query make sure you have the 'Include Actual Execution Plan' option enabled in the query window. If you don't have this, or something similar that enables query profiling, then the DMV won't produce any results.

Step 2 – Run SELECT @@SPID and note the session ID it returns

Step 3 – run your CREATE INDEX in the same query window as step (2)

Step 4 – Create a new query with the following SQL, replacing the SPID in the first line with the value from step (2)

DECLARE @SPID INT = 0; -- replace with the SPID from step (2)

;WITH agg AS
(
SELECT SUM(qp.[row_count]) AS [RowsProcessed],
       SUM(qp.[estimate_row_count]) AS [TotalRows],
       MAX(qp.[last_active_time]) - MIN(qp.[first_active_time]) AS [ElapsedMS],
       MAX(IIF(qp.[close_time] = 0 AND qp.[first_row_time] > 0,
               qp.[physical_operator_name],
               N'<Transition>')) AS [CurrentStep]
FROM sys.dm_exec_query_profiles qp
WHERE qp.[physical_operator_name] IN (N'Table Scan', N'Clustered Index Scan', N'Sort')
AND   qp.[session_id] = @SPID
), comp AS
(
SELECT [CurrentStep],
       [RowsProcessed],
       [TotalRows],
       ([TotalRows] - [RowsProcessed]) AS [RowsLeft],
       ([ElapsedMS] / 1000.0) AS [ElapsedSeconds]
FROM   agg
)
SELECT [CurrentStep],
       ((([RowsProcessed] * 1.0) / [TotalRows]) * 100) AS [PercentComplete],
       (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]) AS [EstimatedSecondsLeft],
       DATEADD(SECOND,
               (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]),
               GETDATE()) AS [EstimatedCompletionTime]
FROM   comp;

Problem shrinking tempdb – could not be moved because it is a work table page

SQL Server holds a lot of cached information in tempdb and, until it's cleared, it won't allow tempdb to be shrunk. So here's a quick script you might have some joy with (obviously be aware that you will suffer the cost of losing this cached information) (thanks to MSDN);

USE [tempdb];
GO
-- Drop the cached objects holding the work table pages
DBCC FREESYSTEMCACHE ('ALL');
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;
GO
-- Now the shrink should succeed (tempdev is the default logical name of the data file)
DBCC SHRINKFILE (tempdev);
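To confirm the shrink actually freed space, you can check the tempdb file sizes afterwards; a quick check (the view and columns are standard, the size arithmetic just converts 8KB pages to MB):

```sql
-- Current size of each tempdb file in MB (size is in 8KB pages)
SELECT [name], [physical_name], ([size] * 8) / 1024 AS [SizeMB]
FROM   tempdb.sys.database_files;
```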
How to Backup SQL to Azure Storage

SQL Server 2016 allows you to back up a database to Azure Storage. I found the information about this to be disjointed at best. So, if you want to do an old-style (non-snapshot) backup of a database, this is how I achieved it;


My server is running on a virtual machine in the same Azure Resource Group as the storage account, and I'm not too concerned about SQL Server knowing the storage account keys, so I'm going to show the shortest path to getting this working.


Assume we’re backing up a Database called Products.

  1. Create Storage Account – Create a 'standard', non-premium storage account. Don't be tempted to use premium; it will not work. Let's call it sqlnoprembackup
  2. Create a container in the blob storage, perhaps 'backupsql1'. Grab the full path to the container; if you're using the portal, select the container and view its properties.
  3. Grab the account name and key for the storage account ('Access Keys' if you're using the portal). Here that is sqlnoprembackup and the first long key.
  4. Create the credential on SQL Server; let's call it backupstorage;
    IF NOT EXISTS
    (SELECT * FROM sys.credentials
     WHERE name = 'backupstorage')
    CREATE CREDENTIAL [backupstorage] WITH IDENTITY = 'sqlnoprembackup'
    ,SECRET = '<insert secret storage key from (3) here>';

  5. Backup the database, using the full path to the container from (2). Note STATS = 5 will display the % progress at 5% intervals in the messages tab;
    BACKUP DATABASE Products
    TO URL = '<full container path from (2)>/Products.bak'
    WITH CREDENTIAL = 'backupstorage'
         ,STATS = 5;

  6. Run it. Yay.
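To sanity-check the backup once it completes, RESTORE VERIFYONLY works against a URL too; a quick sketch, using the same container-path placeholder as above:

```sql
-- Verify the backup is readable without actually restoring it
-- (the URL is a placeholder; substitute your container path from (2))
RESTORE VERIFYONLY
FROM URL = '<full container path from (2)>/Products.bak'
WITH CREDENTIAL = 'backupstorage';
```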

Restoring the database

To do a full restore of the database;

  1. Take the database offline.
  2. Restore the database, again using the full path to the container;
    RESTORE DATABASE Products FROM URL = '<full container path from (2)>/Products.bak'
    WITH CREDENTIAL = 'backupstorage', REPLACE, STATS = 5;

NB, take a look at snapshot backups; it's a much better solution. Sometimes, as in my case, it's not available and this hybrid approach works quite nicely.

SQL Server – The log cannot be rebuilt because

I had one of those blood-drained-from-face moments when a database stated it was 'pending recovery'. Ultimately the transaction log file was broken. Actually I didn't care about that, because I knew it was pretty much all read-only data. So how to recover it? I tried various recovery techniques but the server was adamant: 'restore from backup'. Well, I couldn't do that either. Finally I found this little nugget of DDL, which creates a new database using the existing data files but with a new transaction log;
CREATE DATABASE [<new dbname>]
ON
( NAME = N'<filename>', FILENAME = N'<path to>.mdf' ),
( NAME = N'<filename2>', FILENAME = N'<path to 2nd>.mdf' )
FOR ATTACH_REBUILD_LOG;


SQL Server with external drives – pending recovery

When you are developing a SQL Server database you might want to use external storage devices. Beware of a couple of slight gotchas. When you restart your machine, SQL Server may start before the drives become available; if this happens SQL Server will mark the database as 'pending recovery'. To get out of this situation just restart the SQL Server instance and refresh the database node in SQL Server Management Studio. A second, related problem is that the drives may not be assigned the same drive letter, which will again result in 'pending recovery'. If you cannot remember the drive letter you assigned, you can run the following against the system database;

SELECT [name], physical_name FROM sys.master_files;

If you now have an incorrectly assigned drive letter, the best solution is to open the disk management tool, where you can right-click and assign the original drive letter back.
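If you want to see at a glance which databases are affected, sys.databases exposes the state directly; a quick check (the view and columns are standard, the filter is just illustrative):

```sql
-- List databases that are not ONLINE, e.g. RECOVERY_PENDING after a drive went missing
SELECT [name], state_desc
FROM   sys.databases
WHERE  state_desc <> N'ONLINE';
```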

How to connect to a SQL2008 machine

Ok I admit it, I'm a developer and configuring access to SQL Server is not my bag (baby). SQL 2005 had a nice big Surface Area Configuration button that just allowed access from clients, so I ignored what it was doing, pressed it and got on with my work. However, SQL 2008 seems to have been targeted at me, because it no longer has such a backdoor and forced me to find out what I should do. So I thought I'd write it down for others who prefer to search the web rather than read the documentation <oops>.

First off you need to open the SQL Server Configuration Manager and check your favourite protocol is enabled; for me that would be TCP/IP. Double-click it and select the IP Addresses tab. You'll notice in there a list of possible IP addresses and ports for SQL Server to listen on. If you maintain your (mine) lazy tendencies, you can simply change the setting from dynamic to static and plug in port 1433, the port clients try by default when they connect. Now this isn't supposed to be a good idea (no idea why – you'll see why later), so stick with dynamic ports. This means SQL Server will be listening on some port, but you've no idea which, as it may change each time the server starts. So you must start the SQL Browser Service; this acts as the go-between and "tells" the client what port to use. So there you are, you should now be off and running. But why not use 1433? I thought the idea was to slow those pesky hackers down, but if you have to open the Browser Service then what does that achieve? Something I need to look into, when I have the time, when I have the time, when I have the time…
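From the client side the difference looks like this; these connection strings are purely illustrative (server, instance, and database names are made up):

```
-- With the SQL Browser Service running, connect by instance name;
-- the Browser resolves the dynamic port for you:
Server=MyServer\MyInstance;Database=MyDb;Integrated Security=true;

-- With a static port (e.g. 1433) the Browser isn't needed;
-- specify the port directly after the server name:
Server=MyServer,1433;Database=MyDb;Integrated Security=true;
```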

Counting the occurrences of a character in a column

I needed to count the number of periods in an nvarchar column and, as usual, there is no specific help in SQL's string library. I don't want to go the CLR route, as it's a maintenance script and I can't add CLR components. So here comes a bit of SQL gymnastics…or hack if you will…
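The usual non-CLR trick is to remove the character with REPLACE and compare string lengths; a sketch, assuming a hypothetical table [MyTable] with an nvarchar column [MyColumn]:

```sql
-- Count periods per row: length before minus length after removing them
-- ([MyTable] and [MyColumn] are illustrative names)
SELECT [MyColumn],
       LEN([MyColumn]) - LEN(REPLACE([MyColumn], N'.', N'')) AS [PeriodCount]
FROM   [MyTable];
```

One caveat: LEN ignores trailing spaces, which doesn't matter when counting periods but will mislead if you count spaces themselves; DATALENGTH/2 is the safer measure for nvarchar in that case.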

Problem connecting to an instance of a SQL Server Cluster

I was asked to find out why a web server could not communicate with the database. After checking all the usual network configurations I was beginning to despair. Together with the sys-admin we started firing off telnet connections, pings, etc.; all worked, but still SQL clients failed to connect. Finally, by luck more than judgement, the sys-admin attempted to connect to the database without the SQL instance name. Voilà, it worked. It seems that when you make a clustered virtual server it forms an alias from both the server name *and* the instance name, i.e. MyServer\MyInstance simply becomes MyClusterServer. We hadn't had this problem on previous builds because we normally just use the default instance.