COPY_ONLY backups and Log Shipping


Last week I was in the process of migrating a couple of SQL Server instances from 2008 R2 to 2012.

In order to let the migration complete quickly, I set up log shipping from the old instance to the new instance. Obviously, the existing backup jobs had to be disabled, otherwise they would have broken the log chain.

That got me thinking: was there a way to keep both “regular” transaction log backups (taken by the backup tool) and the transaction log backups taken by log shipping?

[Figure: log shipping architecture]

The first thing that came to my mind was the COPY_ONLY option available since SQL Server 2005.

You probably know that COPY_ONLY backups are useful when you have to take a backup for a special purpose, for instance when you have to restore from production to test. With the COPY_ONLY option, database backups don’t break the differential base and transaction log backups don’t break the log chain.
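
For instance, a one-off copy for refreshing a test environment could be taken like this (just a sketch, with a hypothetical backup path):

-- full backup that does not affect the differential base
BACKUP DATABASE TEST
TO DISK = 'D:\Backups\TEST_for_test_refresh.bak'
WITH COPY_ONLY, INIT;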

My initial thought was that I could ship COPY_ONLY backups to the secondary and keep taking scheduled transaction log backups with the existing backup tools.

I was dead wrong.

Let’s see it with an example on a TEST database.

I took 5 backups:

  1. FULL database backup, to initialize the log chain. Please note that COPY_ONLY backups cannot be used to initialize the log chain.
  2. LOG backup
  3. LOG backup with the COPY_ONLY option
  4. LOG backup
  5. LOG backup with the COPY_ONLY option
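
For reference, the sequence above boils down to something like this (a sketch, with hypothetical backup paths):

-- 1. FULL backup, initializes the log chain
BACKUP DATABASE TEST TO DISK = 'D:\Backups\TEST_full.bak' WITH INIT;
-- 2. regular LOG backup
BACKUP LOG TEST TO DISK = 'D:\Backups\TEST_log1.trn' WITH INIT;
-- 3. LOG backup with COPY_ONLY
BACKUP LOG TEST TO DISK = 'D:\Backups\TEST_log2_copyonly.trn' WITH COPY_ONLY, INIT;
-- 4. regular LOG backup
BACKUP LOG TEST TO DISK = 'D:\Backups\TEST_log3.trn' WITH INIT;
-- 5. LOG backup with COPY_ONLY
BACKUP LOG TEST TO DISK = 'D:\Backups\TEST_log4_copyonly.trn' WITH COPY_ONLY, INIT;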

The backup information can be queried from the backupset table in msdb:

SELECT
     ROW_NUMBER() OVER(ORDER BY bs.backup_start_date) AS [backup #]
    ,first_lsn
    ,last_lsn
    ,backup_start_date
    ,type
    ,is_copy_only
    ,DENSE_RANK() OVER(ORDER BY type, bs.first_lsn) AS sequence
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'TEST'

[Figure: backup sets recorded for the TEST database]

As you can see, the COPY_ONLY backups don’t truncate the transaction log and losing one of those backups wouldn’t break the log chain.

However, every log backup, COPY_ONLY or not, starts from the point where the last regular log backup left off. Since the scheduled log backups taken without the COPY_ONLY option truncate the transaction log and move that starting point forward, the portions of log they capture never make it into the next COPY_ONLY backup.

You can see it clearly in the following picture: the LSNs highlighted in red should contain no gaps in order to be restored successfully to the secondary, but the regular TLOG backups leave exactly those gaps in the COPY_ONLY sequence.

[Figure: backup sets with the LSNs required by the secondary highlighted in red]

That means that there’s little or no point in taking COPY_ONLY transaction log backups, as “regular” backups will always leave gaps in the COPY_ONLY log chain.
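
If you want to check your own chains, here is a sketch of a query that compares each COPY_ONLY log backup with the previous one (LAG requires SQL Server 2012 or later): consecutive backups chain correctly only when each first_lsn equals the previous last_lsn.

WITH copy_only_logs AS (
    SELECT backup_start_date,
           first_lsn,
           last_lsn,
           LAG(last_lsn) OVER(ORDER BY first_lsn) AS prev_last_lsn
    FROM msdb.dbo.backupset
    WHERE database_name = 'TEST'
      AND type = 'L'
      AND is_copy_only = 1
)
SELECT backup_start_date,
       prev_last_lsn,
       first_lsn,
       CASE WHEN first_lsn > prev_last_lsn THEN 'GAP' ELSE 'OK' END AS chain_status
FROM copy_only_logs
WHERE prev_last_lsn IS NOT NULL;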

When log shipping is used, the secondary server is the only backup you can have, unless you keep the TLOG backups taken by the log shipping jobs or use your backup tool directly to ship the logs.

Why on earth one should take a COPY_ONLY TLOG backup (more than one, at least) is beyond my comprehension, but that’s a whole different story.

Open SSMS Query Results in Excel with a Single Click


The problem

One of the tasks that I often have to complete is manipulating some data in Excel, starting from the query results in SSMS.

Excel is a very convenient tool for one-off reports, quick data manipulation, simple charts.

Unfortunately, SSMS doesn’t ship with a tool to export grid results to Excel quickly.

Excel offers some ways to import data from SQL queries, but none of those offers the rich query tools available in SSMS. A representative example is Microsoft Query: how am I supposed to edit a query in a text editor like this?

[Figure: the Microsoft Query editor]

Enough said.

Actually, there are many ways to export data from SQL Server to Excel, including SSIS packages and the Import/Export wizard. Again, all those methods require writing your queries in a separate tool, often with very limited editing capabilities.

PowerQuery offers great support for data exploration, but it is a totally different beast and I don’t see it as an alternative to running SQL queries directly.

The solution

How can I edit my queries taking advantage of the query editing features of SSMS, review the results and then format the data directly in Excel?

The answer is that SSMS cannot do that but, fortunately, the good guys at Solutions Crew have brought us a great tool that can do that and much more.

SSMSBoost is a free add-in that empowers SSMS with many useful features, among which exporting to Excel is just one. I highly suggest that you check out the feature list, because it’s really impressive.

Once SSMSBoost is installed, every time you right click a results grid, a context menu appears that lets you export the grid data to several formats.

No surprises, one of those formats is indeed Excel.

[Figure: SSMSBoost export context menu on a results grid]

The feature works great, even with relatively big result sets. However, it requires 5 clicks to create the Spreadsheet file and one more click to open it in Excel:

[Figure: SSMSBoost prompt to open the exported file]

So, where is the single click I promised in the title of this post?

The good news is that SSMSBoost can be automated by combining commands into macros to accomplish complex tasks.

Here’s how to create a one-click “open in Excel” command:

First, open the SSMSBoost settings window by clicking the “Extras” button.

[Figure: the Extras button in the SSMSBoost toolbar]

In the “Shortcuts & Macros” tab you can edit and add macros to the toolbar or the context menu and even assign a keyboard shortcut.

[Figure: SSMSBoost settings window, Shortcuts & Macros tab]

Clicking the “definitions” field opens the macro editor:

[Figure: SSMSBoost macro editor]

Select “Add” and choose the following command: “SSMSBoost.Connect.GridDataCopyTemplateAllGridsDisk3”. This command corresponds to the “Script all grids as Excel to disk” command in SSMSBoost.

Now save everything with OK and close. You will notice a new button in your toolbar:

[Figure: the new “Script grids as Excel” toolbar button]

That button allows you to export all grids to Excel in a single click.

You’re almost there: now you just need something to open the Excel file automatically, without the need for additional clicks.

To accomplish this task, you can use a Powershell script, bound to a custom External Tool.

Open the External Tools editor (Tools, External Tools), click “Add” and type these parameters:

Title: Open last XML in Excel

Command: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Arguments: -ExecutionPolicy Bypass -File %USERPROFILE%\openLastExcel.ps1

(Note: -ExecutionPolicy has to come before -File, because everything that follows -File is passed to the script as an argument.)

[Figure: the External Tools editor]

Click OK to close the External Tools editor.

This command lets you open the last XML file created in the SSMSBoost output directory, using a small Powershell script that you have to create in your %USERPROFILE% directory.

The script looks like this:

## =============================================
## Author:      Gianluca Sartori - @spaghettidba
## Create date: 2014-01-15
## Description: Open the last XML file in the SSMSBoost
##              output directory with Excel
## =============================================

sl $env:UserProfile

# This is the SSMSBoost 2012 settings file
# If you have the 2008 version, change this path
# Sorry, I could not find a registry key to automate it.
$settingsFile = "$env:UserProfile\AppData\Local\Solutions Crew\Ssms2012\SSMSBoostSettings.xml"

# Open the settings file to look up the export directory
$xmldata=[xml](get-content $settingsFile)

$xlsTemplate = $xmldata.SSMSBoostSettings.GridDataCopyTemplates.ChildNodes |
    Where-Object { $_.Name -eq "Excel (MS XML Spreadsheet)" }

$SSMSBoostPath = [System.IO.Path]::GetDirectoryName($xlsTemplate.SavePath)

$SSMSBoostPath = [System.Environment]::ExpandEnvironmentVariables($SSMSBoostPath)

# we filter out files created before (now -1 second)
$startTime = (get-date).addSeconds(-1);

$targetFile = $null;

while($targetFile -eq $null){
    $targetFile = Get-ChildItem -Path $SSMSBoostPath |
        Where-Object { $_.extension -eq '.xml' -and $_.lastWriteTime -gt $startTime } |
        Sort-Object -Property LastWriteTime |
        Select-Object -Last 1;

    # file not found? Wait for SSMSBoost to finish exporting
    if($targetFile -eq $null) {
        Start-Sleep -Milliseconds 100
    }
};

$fileToOpen = $targetFile.FullName

# enclose the output file path in quotes if needed
if($fileToOpen -like "* *"){
    $fileToOpen = "`"" + $fileToOpen + "`""
}

# open the file in Excel
# ShellExecute is much safer than messing with COM objects...
$sh = new-object -com 'Shell.Application'
$sh.ShellExecute('excel', "/r " + $fileToOpen, '', 'open', 1)

Now you just have to go back to the SSMSBoost settings window and edit the macro you created above.

[Figure: the macro definition with the second step added]

In the definitions field click … to edit the macro and add a second step. The Command to select is “Tools.ExternalCommand1”.

Save and close everything and now your nice toolbar button will be able to open the export file in Excel automagically. Yay!

[Figure: query results opened in Excel]

Troubleshooting

If nothing happens, you might need to change your Powershell Execution Policy. Remember that SSMS is a 32-bit application and you have to set the Execution Policy for the x86 version of Powershell.

Starting Powershell x86 is not easy in Windows 8/8.1. The documentation says to look up “Windows Powershell (x86)” in the start menu, but I could not find it.

The easiest way I have found is through another External Tool in SSMS. Start SSMS as an Administrator (otherwise the UAC will prevent you from changing the Execution Policy) and configure an external tool to run Powershell. Once you’re in, type “Set-ExecutionPolicy Remotesigned” and hit return. The external tool in your macro will now run without issues.

Bottom line

Nothing compares to SSMS when it comes down to writing queries, but Excel is the best place to format and manipulate data.

Now you have a method to take advantage of the best of both worlds. And it only takes one single click.

Enjoy.

2013 in review


The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 19,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 7 sold-out performances for that many people to see it.

Click here to see the complete report.

Copy user databases to a different server with PowerShell


Sometimes you have to copy all user databases from a source server to a destination server.

Copying from development to test could be one reason, but I’m sure there are others.

Since the question came up on the forums at SQLServerCentral, I decided to modify a script I published some months ago to accomplish this task.

Here is the code:

## =============================================
## Author:      Gianluca Sartori - @spaghettidba
## Create date: 2013-10-07
## Description: Copy user databases to a destination
##              server
## =============================================
cls
sl "c:\"
$ErrorActionPreference = "Stop"

# Input your parameters here
$source = "SourceServer\Instance"
$sourceServerUNC = "SourceServer"
$destination = "DestServer\Instance"

# Shared folder on the destination server
# For instance "\\DestServer\D$"
$sharedFolder = "\\DestServer\sharedfolder"
# Path to the shared folder on the destination server
# For instance "D:"
$remoteSharedFolder = "PathOfSharedFolderOnDestServer"

$ts = Get-Date -Format yyyyMMdd

#
# Read default backup path of the source from the registry
#

$SQL_BackupDirectory = @"
    EXEC master.dbo.xp_instance_regread
        N'HKEY_LOCAL_MACHINE',
        N'Software\Microsoft\MSSQLServer\MSSQLServer',
        N'BackupDirectory'
"@

$info = Invoke-sqlcmd -Query $SQL_BackupDirectory -ServerInstance $source

$BackupDirectory = $info.Data

#
# Read master database files location
#
$SQL_Defaultpaths = "
    SELECT *
    FROM (
        SELECT type_desc,
            SUBSTRING(physical_name,1,LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1) AS physical_name
        FROM master.sys.database_files
    ) AS src
    PIVOT( MIN(physical_name) FOR type_desc IN ([ROWS],[LOG])) AS pvt
"

$info = Invoke-sqlcmd -Query $SQL_Defaultpaths -ServerInstance $destination

$DefaultData = $info.ROWS
$DefaultLog = $info.LOG

#
# Process all user databases
#
$SQL_FullRecoveryDatabases = @"
    SELECT name
    FROM master.sys.databases
    WHERE name NOT IN ('master', 'model', 'tempdb', 'msdb', 'distribution')
"@

$info = Invoke-sqlcmd -Query $SQL_FullRecoveryDatabases -ServerInstance $source

$info | ForEach-Object {

    try {

        $DatabaseName = $_.Name

        Write-Output "Processing database $DatabaseName"

        $BackupFile = $DatabaseName + "_" + $ts + ".bak"
        $BackupPath = $BackupDirectory + "\" + $BackupFile
        $RemoteBackupPath = $remoteSharedFolder + "\" + $BackupFile

        $SQL_BackupDatabase = "BACKUP DATABASE $DatabaseName TO DISK='$BackupPath' WITH INIT, COPY_ONLY, COMPRESSION;"

        #
        # Backup database to local path
        #
        Invoke-Sqlcmd -Query $SQL_BackupDatabase -ServerInstance $source -QueryTimeout 65535

        Write-Output "Database backed up to $BackupPath"

        $BackupFile = [System.IO.Path]::GetFileName($BackupPath)

        $SQL_RestoreDatabase = "
            RESTORE DATABASE $DatabaseName
            FROM DISK='$RemoteBackupPath'
            WITH RECOVERY, REPLACE,
        "

        $SQL_RestoreFilelistOnly = "
            RESTORE FILELISTONLY
            FROM DISK='$RemoteBackupPath';
        "

        #
        # Move the backup to the destination
        #

        $remotesourcefile = $BackupPath.Substring(1, 2)
        $remotesourcefile = $BackupPath.Replace($remotesourcefile, $remotesourcefile.replace(":","$"))
        $remotesourcefile = "\\" + $sourceServerUNC + "\" + $remotesourcefile

        Write-Output "Moving $remotesourcefile to $sharedFolder"
        Move-Item $remotesourcefile $sharedFolder -Force

        #
        # Restore the backup on the destination
        #
        $i = 0
        Invoke-Sqlcmd -Query $SQL_RestoreFilelistOnly -ServerInstance $destination -QueryTimeout 65535 | ForEach-Object {
            $currentRow = $_
            $physicalName = [System.IO.Path]::GetFileName($CurrentRow.PhysicalName)
            if($CurrentRow.Type -eq "D") {
                $newName = $DefaultData + $physicalName
            }
            else {
                $newName = $DefaultLog + $physicalName
            }
            if($i -gt 0) {$SQL_RestoreDatabase += ","}
            $SQL_RestoreDatabase += " MOVE '$($CurrentRow.LogicalName)' TO '$NewName'"
            $i += 1
        }

        Write-Output "invoking restore command: $SQL_RestoreDatabase"
        Invoke-Sqlcmd -Query $SQL_RestoreDatabase -ServerInstance $destination -QueryTimeout 65535
        Write-Output "Restored database from $RemoteBackupPath"

        #
        # Delete the backup file
        #
        Write-Output "Deleting $($sharedFolder + "\" + $BackupFile) "
        Remove-Item $($sharedFolder + "\" + $BackupFile) -ErrorAction SilentlyContinue

    }
    catch {
       Write-Error $_
    }

}
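
For reference, the RESTORE statement that the script builds for each database ends up looking roughly like this (hypothetical database name, logical file names and paths):

RESTORE DATABASE MyDatabase
FROM DISK='D:\sharedfolder\MyDatabase_20131007.bak'
WITH RECOVERY, REPLACE,
     MOVE 'MyDatabase' TO 'D:\SQLData\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'D:\SQLData\MyDatabase_log.ldf'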

It’s a quick’n’dirty script and I’m sure there might be something to fix here and there. Just drop a comment if you find something.

Ten features you had in Profiler that are missing in Extended Events


Oooooops!

[Figure: F1 crash]

I exchanged some emails about my post with Jonathan Kehayias and it looks like I was wrong on many of the points I made.

I don’t want to keep misleading information around and I definitely need to fix my wrong assumptions.

Unfortunately, I don’t have the time to correct it immediately and I’m afraid it will have to remain like this for a while.

Sorry for the inconvenience, I promise I will try to fix it in the next few days.

Error upgrading MDW from 2008R2 to 2012


Some months ago I posted a method to overcome some quirks in the MDW database in a clustered environment.

Today I tried to upgrade that clustered instance (in a test environment, fortunately) and I got some really annoying errors.

Actually, I got what I deserved for messing with the system databases and I wouldn’t even dare post my experience if it wasn’t caused by something I suggested on this blog.

However, every cloud has a silver lining: many things can go wrong when upgrading a cluster and the resolution I will describe here can fit many failed upgrade situations.

So, what’s wrong with the solution I proposed back in March?

The offending line in that code is the following:

EXEC sp_rename 'core.source_info_internal', 'source_info_internal_ms'

What happened here? Basically, I renamed a table in the MDW and I created a view in its place.

One of the steps of the setup process tries to upgrade the MDW database with a script, which is executed at the first startup on an upgraded cluster node.

The script fails and the following message is found in the ERRORLOG:

Creating table [core].[source_info_internal]...
2013-10-24 09:21:02.99 spid8s Error: 2714, Severity: 16, State: 6.
2013-10-24 09:21:02.99 spid8s There is already an object named 'source_info_internal' in the database.
2013-10-24 09:21:02.99 spid8s Error: 912, Severity: 21, State: 2.
2013-10-24 09:21:02.99 spid8s Script level upgrade for database 'master' failed because upgrade step 'upgrade_ucp_cmdw.sql' encountered error 3602, state 51, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2013-10-24 09:21:02.99 spid8s Error: 3417, Severity: 21, State: 3.
2013-10-24 09:21:02.99 spid8s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

Every attempt to bring the SQL Server resource online results in a similar error message.

The only way to fix the error is to restore the initial state of the MDW database, renaming the table core.source_info_internal_ms to its original name.

But how can it be done, since the instance refuses to start?

Microsoft added and documented trace flag 902, which can be used to bypass the upgrade scripts at instance startup.

Remember that the startup parameters of a clustered instance cannot be modified while the resource is offline, because the registry checkpointing mechanism will restore the registry values stored in the quorum disk while bringing the resource online.

There are three ways to start the instance in this situation:

  1. modify the startup parameters by disabling checkpointing
  2. modify the startup parameters in the quorum registry hives
  3. start the instance manually at the command prompt

Method 3 is the simplest one in this situation and is what I ended up doing.

Once the instance started, I renamed the view core.source_info_internal and renamed core.source_info_internal_ms to its original name.
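
In T-SQL terms, the fix boils down to something like this (a sketch; the new name for the view is hypothetical, and MDW stands for whatever you called your management data warehouse database):

USE MDW;
GO
-- get the view out of the way
EXEC sp_rename 'core.source_info_internal', 'source_info_internal_view';
GO
-- put the original table back in place
EXEC sp_rename 'core.source_info_internal_ms', 'source_info_internal';
GO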

The instance could then be stopped (CTRL+C at the command prompt or SHUTDOWN WITH NOWAIT in sqlcmd) and restarted removing the trace flag.

With the MDW in its correct state, the upgrade scripts completed without errors and the clustered instance could be upgraded to SQL Server 2012 without issues.

Lessons learned:

  1. Never, ever mess with the system databases. The MDW is not technically a system database, but it’s shipped by Microsoft and should be kept untouched. If you decide you absolutely need to modify something there, remember to undo your changes before upgrading and applying service packs.
  2. Always test your environment before upgrading. It took me 1 hour to fix the issue and not every upgrade scenario tolerates 1 hour of downtime. Think about it.
  3. Test your upgrade.
  4. Did I mention you need to test?

Jokes aside, I caught my error in a test environment and I’m happy it was not in production.

As the saying goes, better safe than sorry.

SQL Server services are gone after upgrading to Windows 8.1


Yesterday I upgraded my laptop to Windows 8.1 and everything seemed to have gone smoothly.

I really like the improvements in Windows 8.1 and I think they’re worth the hassle of an upgrade if you’re still on Windows 8.

As I was saying, everything seemed to upgrade smoothly. Unfortunately, today I found out that SQL Server services were gone.

My configuration manager looked like this:

[Figure: SQL Server Configuration Manager showing only SQL Server Browser]

My laptop had an instance of SQL Server 2012 SP1 Developer Edition and the Windows upgrade process had deleted all SQL Server services but SQL Server Browser.

I thought that a repair would fix the issue, so I took out my SQL Server iso and ran the setup.

Unfortunately, during the repair process, something went wrong and it complained multiple times about “no mappings between Security IDs and account names” or something similar.

Anyway, the setup completed and the services were back in place, but were totally misconfigured.

[Figure: SQL Server Configuration Manager after the repair]

SQL Server Agent had start mode “disabled” and the service account had been changed to “localsystem” (go figure…).

After changing the start mode and the service accounts, everything was back to normal.

I hope this post helps others that are facing the same issue.

Check SQL Server logins with weak password


SQL Server logins can implement the same password policies found in Active Directory to make sure that strong passwords are being used.

Unfortunately, especially for servers upgraded from previous versions, the password policies are often disabled and some logins have very weak passwords.
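
A quick way to spot them is to query sys.sql_logins, which exposes the policy flags (a sketch, SQL Server 2005 and later):

-- SQL logins that do not enforce the Windows password policy
SELECT name, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0;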

In particular, some logins could have the password set as equal to the login name, which would be one of the first things I would try when attacking a server.

Are you sure none of your logins has such a poor password?

PowerShell to the rescue!

try {
    if((Get-PSSnapin -Name SQlServerCmdletSnapin100 -ErrorAction SilentlyContinue) -eq $null){
        Add-PSSnapin SQlServerCmdletSnapin100
    }
}
catch {
    Write-Error "This script requires the SQLServerCmdletSnapIn100 snapin"
    exit
}

cls

# Query server names from your Central Management Server
$qry = "
SELECT server_name
FROM msdb.dbo.sysmanagement_shared_registered_servers
"

$servers = Invoke-Sqlcmd -Query $qry -ServerInstance "YourCMSServerGoesHere"

# Extract SQL Server logins
# Why syslogins and not sys.server_principals?
# Believe it or not, I still support a couple of SQL Server 2000
$qry_logins = "
SELECT loginname, sysadmin
FROM syslogins
WHERE isntname = 0
AND loginname NOT LIKE '##%##'
"

$dangerous_logins = @()

$servers | % {
    $currentServer = $_.server_name
    $logins = Invoke-Sqlcmd -Query $qry_logins -ServerInstance $currentServer

    $logins | % {

        $currentLogin = $_.loginname
        $isSysAdmin = $_.sysadmin

        try {
            # Attempt logging in with login = password
            $one = Invoke-Sqlcmd -Query "SELECT 1" -ServerInstance $currentServer -Username $currentLogin -Password $currentLogin -ErrorAction Stop
            # OMG! Login successful
            # Add the login to $dangerous_logins
            $info = @{}
            $info.LoginName = $currentLogin
            $info.Sysadmin = $isSysAdmin
            $info.ServerName = $currentServer
            $loginInfo = New-Object -TypeName PsObject -Property $info
            $dangerous_logins += $loginInfo
        }
        catch {
            # If the login attempt fails, don't add the login to $dangerous_logins
        }

    }
}

#display dangerous logins
$dangerous_logins
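
As a side note, if you only have instances running SQL Server 2005 or later, the same check can be done directly in T-SQL with PWDCOMPARE, without attempting any actual login (a sketch):

-- SQL logins whose password is equal to the login name
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1;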

SQL2014: Defining non-unique indexes in the CREATE TABLE statement


Now that my SQL Server 2014 CTP1 virtual machine is ready, I started to play with it and some new features and differences from the previous versions are starting to appear.

What I want to write about today is a T-SQL enhancement to DDL statements that brings in some new interesting considerations.

SQL Server 2014 now supports a new T-SQL syntax that allows defining an index in the CREATE TABLE statement without having to issue separate CREATE INDEX statements.

Up to now, the same could be achieved only with PRIMARY KEY and UNIQUE constraints, thus allowing UNIQUE indexes only.

For instance, the following statement creates a table with a unique clustered index on order_id and a unique nonclustered index on PO_number:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL
    ,total_amount decimal(18,3)
)

OK, but what if I want to add a non-unique index to my table?

SQL Server 2014 offers a new syntax to do that inline with the table DDL:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
     -- creates a nonclustered index on order_date
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

A similar syntax can be used to create a compound index:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
    -- creates a compound index on PO_number and order_date
    ,INDEX IX_orders_compound(PO_number, order_date)
)

An interesting aspect of this new syntax is that it allows creating non-unique nonclustered indexes on table variables, which is something that couldn’t be done in the previous versions.

The syntax to use is the same as for permanent tables:

DECLARE @orders TABLE (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

Cool! But, wait: does this mean that table variables will now behave in the same way permanent tables do?

Not exactly.

Table variables don’t have statistics, and being able to create indexes on them won’t change anything in this regard.

Do you want proof? OK, the skeptics can run the following code. Please make sure you capture the actual execution plan.

SET NOCOUNT ON;

-- create the table variable
DECLARE @orders TABLE (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

-- insert some data
INSERT INTO @orders (order_date, PO_number, total_amount)
SELECT
     order_date   = DATEADD(second, CHECKSUM(NEWID()), GETDATE())
    ,PO_number    = CAST(NEWID() AS varchar(50))
    ,total_amount = CHECKSUM(NEWID()) / 1000.0
FROM sys.all_columns

SELECT COUNT(*)
FROM @orders
WHERE order_date > GETDATE()
OPTION (
     -- activate some (undocumented) trace flags to show
     -- statistics usage. More information on the flags
     -- can be found on Paul White's blog:
     -- http://sqlblog.com/blogs/paul_white/archive/2011/09/21/how-to-find-the-statistics-used-to-compile-an-execution-plan.aspx

     -- redirect output to the messages tab
     QUERYTRACEON 3604
     -- show "interesting" statistics
    ,QUERYTRACEON 9292
    -- show loaded statistics
    ,QUERYTRACEON 9402
    -- add RECOMPILE to let the optimizer "see"
    -- the table cardinality
    ,RECOMPILE
)

The messages output of the above batch is empty: looks like no stats were loaded.

The actual execution plan confirms that no stats were loaded and the estimated cardinality of the table variable is way off:

[Figure: actual execution plan for the table variable]

If we repeat the test with a temporary table, we see a different behaviour.

SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#orders') IS NOT NULL
    DROP TABLE #orders;

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

INSERT INTO #orders (order_date, PO_number, total_amount)
SELECT
     order_date   = DATEADD(second, CHECKSUM(NEWID()), GETDATE())
    ,PO_number    = CAST(NEWID() AS varchar(50))
    ,total_amount = CHECKSUM(NEWID()) / 1000.0
FROM sys.all_columns

SELECT COUNT(*)
FROM #orders
WHERE order_date > GETDATE()
OPTION (
     QUERYTRACEON 3604
    ,QUERYTRACEON 9292
    ,QUERYTRACEON 9402
)

This time the messages tab contains some output:

Stats header loaded: DbName: tempdb, ObjName: #orders, IndexId: 2, ColumnName: order_date, EmptyTable: FALSE

Stats header loaded: DbName: tempdb, ObjName: #orders, IndexId: 2, ColumnName: order_date, EmptyTable: FALSE

The optimizer identified the statistics on the order_date column as “interesting” and then loaded the stats header.

Again, the actual execution plan confirms that a better estimation is available:

[Figure: actual execution plan for the temporary table]

The lack of statistics has always been the most significant difference between table variables and temporary tables and SQL2014 doesn’t appear to change the rules (yet).

SQL Server 2014 CTP1 available for download


The day has come at last! SQL Server 2014 CTP1 is available for download!

You can now start playing with Hekaton and all the exciting features included in this version.

My suggestion is to create a new virtual machine for your playground: this CTP version won’t install on a machine with other installations of SQL Server.

If you prefer, and you have a subscription, you can also try SQL Server 2014 on Windows Azure.

While you’re at it, take the time to read the documentation carefully. You’ll soon discover that the in-memory engine has some limitations. I hope that the next versions will overcome them and give us more usable software.

What are you waiting for? Download now!
