Monthly Archives: February 2013
dba_runCHECKDB v2(012)
If you are one of the many who downloaded my consistency check stored procedure “dba_RunCHECKDB”, you may have noticed a “small” glitch… it doesn’t work on SQL Server 2012!
This is due to the resultset definition of DBCC CHECKDB, which has changed again in SQL Server 2012. Trying to pipe the results of that command into the table defined for SQL Server 2008 produces a column mismatch, and the procedure obviously fails.
Fixing the code is very easy indeed, but I could never find the time to post the corrected version until today.
Also, I had to discover the new table definition for DBCC CHECKDB, which was not as easy as it used to be in SQL Server 2008. In fact, a couple of days ago I posted a way to discover the new resultset definition, working around the cumbersome metadata discovery feature introduced in SQL Server 2012.
Basically, the output of DBCC CHECKDB now includes 6 new columns:
CREATE TABLE ##DBCC_OUTPUT (
    Error int NULL,
    [Level] int NULL,
    State int NULL,
    MessageText nvarchar(2048) NULL,
    RepairLevel nvarchar(22) NULL,
    Status int NULL,
    DbId int NULL,          -- was smallint in SQL2005
    DbFragId int NULL,      -- new in SQL2012
    ObjectId int NULL,
    IndexId int NULL,
    PartitionId bigint NULL,
    AllocUnitId bigint NULL,
    RidDbId smallint NULL,  -- new in SQL2012
    RidPruId smallint NULL, -- new in SQL2012
    [File] smallint NULL,
    Page int NULL,
    Slot int NULL,
    RefDbId smallint NULL,  -- new in SQL2012
    RefPruId smallint NULL, -- new in SQL2012
    RefFile smallint NULL,  -- new in SQL2012
    RefPage int NULL,
    RefSlot int NULL,
    Allocation smallint NULL
)
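With this table in place, the output can be captured with the usual INSERT … EXEC pattern. Here is a minimal sketch (the database name is just a placeholder):

INSERT INTO ##DBCC_OUTPUT
EXEC('DBCC CHECKDB (AdventureWorks2012) WITH TABLERESULTS, NO_INFOMSGS');

-- Inspect whatever CHECKDB reported
SELECT * FROM ##DBCC_OUTPUT;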
If you Google the name of one of these new columns, you will probably find a lot of blog posts (no official documentation, unfortunately) that describe the new output of DBCC CHECKDB, but none of them is strictly correct: all of them list the smallint columns as int.
Not a big deal, actually, but still incorrect.
I will refrain from posting the whole procedure here: I updated the code in the original post, which you can find by clicking here. You can also download the code from the Code Repository.
As usual, suggestions and comments are welcome.
Discovering resultset definition of DBCC commands in SQL Server 2012
Back in 2011 I showed a method to discover the resultset definition of undocumented DBCC commands.
At the time, SQL Server 2012 had not been released yet and nothing suggested that the linked server trick could stop working on the new major version. Surprisingly enough it did.
If you try to run the same code showed in that old post on a 2012 instance, you will get a quite explicit error message:
DECLARE @srv nvarchar(4000);
SET @srv = @@SERVERNAME; -- gather this server name

-- Create the linked server
EXEC master.dbo.sp_addlinkedserver
    @server     = N'LOOPBACK',
    @srvproduct = N'SQLServ', -- it’s not a typo: it can’t be “SQLServer”
    @provider   = N'SQLNCLI', -- change to SQLOLEDB for SQLServer 2000
    @datasrc    = @srv;

-- Set the authentication to "current security context"
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'LOOPBACK',
    @useself     = N'True',
    @locallogin  = NULL,
    @rmtuser     = NULL,
    @rmtpassword = NULL;

USE tempdb;
GO

CREATE PROCEDURE loginfo
AS
BEGIN
    SET NOCOUNT ON;
    DBCC LOGINFO();
END
GO

SELECT *
INTO tempdb.dbo.loginfo_output
FROM OPENQUERY(LOOPBACK, 'SET FMTONLY OFF; EXEC tempdb.dbo.loginfo');

DROP PROCEDURE loginfo;
GO
Msg 11528, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement 'DBCC LOGINFO();' in procedure 'loginfo' does not support metadata discovery.
This behaviour has to do with the way SQL Server 2012 tries to discover metadata at parse/bind time, when the resultset is not available yet for DBCC commands.
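You can see the same limitation by calling the metadata discovery procedure directly on a 2012 instance; this is just a sketch to illustrate the point:

-- Fails with Msg 11528: DBCC statements do not support metadata discovery
EXEC sp_describe_first_result_set @tsql = N'DBCC LOGINFO();';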
Fortunately, there is still a way to discover metadata when you have a SQL Server instance of a previous version available.
On my laptop I have a 2008R2 instance I can use to query the 2012 instance with a linked server:
-- run this on the 2008R2 instance
USE [master]
GO

EXEC master.dbo.sp_addlinkedserver
     @server     = N'LOCALHOST\SQL2012'
    ,@srvproduct = N'SQL Server'

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'LOCALHOST\SQL2012'
    ,@useself     = N'True'
    ,@locallogin  = NULL
    ,@rmtuser     = NULL
    ,@rmtpassword = NULL
GO

SELECT *
INTO tempdb.dbo.loginfo_output
FROM OPENQUERY([LOCALHOST\SQL2012], 'SET FMTONLY OFF; EXEC tempdb.dbo.loginfo');
GO
This code pipes the results of the DBCC command into a table in the tempdb database on my 2008R2 instance. The table can now be scripted using SSMS.
Using the 2008R2 instance as a “Trojan Horse” for the metadata discovery, you can see that the resultset definition of DBCC LOGINFO() has changed again in SQL Server 2012:
CREATE TABLE [dbo].[loginfo_output](
    [RecoveryUnitId] [int] NULL, -- new in SQL2012
    [FileId] [int] NULL,
    [FileSize] [bigint] NULL,
    [StartOffset] [bigint] NULL,
    [FSeqNo] [int] NULL,
    [Status] [int] NULL,
    [Parity] [tinyint] NULL,
    [CreateLSN] [numeric](25, 0) NULL
)
This trick will be particularly useful for an upcoming (and long overdue) post, so… stay tuned!
Extracting DACPACs from all databases with Powershell
If you are adopting SQL Server Data Tools as your tool of choice to maintain database projects under source control and achieve an ALM solution, at some stage you will probably want to import all your databases into SSDT.
Yes, it can be done by hand, one at a time, using either the “import live database” or “schema compare” features, but what I have found to be more convenient is the “import dacpac” feature.
Basically, you can extract a dacpac from a live database and then import it in SSDT, entering some options in the import dialog.
The main reason why I prefer this method is the reduced amount of manual steps involved. Moreover, the dacpac extraction process can be fully automated using sqlpackage.exe.
Recently I had to import a lot of databases in SSDT and found that sqlpackage can be used in a PowerShell script to automate the process even further:
#
# Extract DACPACs from all databases
#
# Author:  Gianluca Sartori - @spaghettidba
# Date:    2013/02/13
# Purpose:
# Loop through all user databases and extract
# a DACPAC file in the working directory
#

Param(
    [Parameter(Position=0,Mandatory=$true)]
    [string]$ServerName
)

cls

try {
    if((Get-PSSnapin -Name SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue) -eq $null){
        Add-PSSnapin SqlServerCmdletSnapin100
    }
}
catch {
    Write-Error "This script requires the SqlServerCmdletSnapin100 snapin"
    exit
}

#
# Gather working directory (script path)
#
$script_path = Split-Path -Parent $MyInvocation.MyCommand.Definition

$sql = "
    SELECT name
    FROM sys.databases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb', 'distribution')
"

$data = Invoke-Sqlcmd -Query $sql -ServerInstance $ServerName -Database master

$data | ForEach-Object {

    $DatabaseName = $_.name

    #
    # Run sqlpackage
    #
    &"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" `
        /Action:extract `
        /SourceServerName:$ServerName `
        /SourceDatabaseName:$DatabaseName `
        /TargetFile:$script_path\DACPACs\$DatabaseName.dacpac `
        /p:ExtractReferencedServerScopedElements=False `
        /p:IgnorePermissions=False
}
It’s a very simple script indeed, but it saved me a lot of time and I wanted to share it with you.
Unfortunately, there is no way to automate the import process in SSDT, but it looks like Microsoft is actually looking into making this feature available in a future version.
Using QUERYTRACEON in plan guides
Yesterday the CSS team publicly documented the QUERYTRACEON hint.
This means that now it’s officially supported and you can use it in production code.
After reading the post on the CSS blog, I started to wonder whether there is some actual use in production for this query hint, given that it requires the same privileges as DBCC TRACEON, which means you have to be a member of the sysadmin role.
In fact, if you try to use that hint when connected as a low privileged user, you get a very precise error message that leaves no room for interpretation:
SELECT * FROM [AdventureWorks2012].[Person].[Person] OPTION (QUERYTRACEON 4199)
Msg 2571, Level 14, State 3, Line 1
User ‘guest’ does not have permission to run DBCC TRACEON.
How can a query hint available only to sysadmins possibly be useful in production?
I posted my doubt on Twitter using the #sqlhelp hashtag and got interesting replies from Paul Randal, Paul White and Robert Davis.
My concerns were not about the usefulness of the hint per se, but about its usefulness in production code. Often 140 characters are not enough to express your thoughts clearly, so I decided to write this blog post to clarify what I mean.
As we have seen, the QUERYTRACEON query hint cannot be used directly by users not in the sysadmin role, but it can be used in stored procedures with “EXECUTE AS owner” and in plan guides.
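For instance, here is a minimal sketch of the stored procedure option; the procedure name is made up, and the owner is assumed to have the required privileges:

-- The hint is evaluated in the security context of the procedure owner
CREATE PROCEDURE dbo.getPersons
WITH EXECUTE AS OWNER
AS
BEGIN
    SELECT *
    FROM [Person].[Person]
    OPTION (QUERYTRACEON 4199);
END
GO
-- grant EXECUTE on the procedure to the low privileged users as needed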
While it’s completely clear what happens when the hint is used in procedures executed in the context of the owner, what happens in plan guides is not so obvious (at least, not to me). In fact, given that the security context is not changed when the plan guide is matched and applied, I would have expected it to fail miserably when executed by a low privileged user, but that’s not the case.
Let’s try and see what happens:
First of all, we need a query “complex enough” to let the optimizer take plan guides into account: a straight “SELECT * FROM table”, or anything else that results in a trivial plan, won’t be enough.
SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > '20080101'
Then we need a plan guide to apply the QUERYTRACEON hint:
EXEC sp_create_plan_guide
    @name = N'[querytraceon]',
    @stmt = N'SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > ''20080101''',
    @type = N'SQL',
    @hints = N'OPTION (QUERYTRACEON 4199)'
If we enable the plan guide and issue this query in the context of a low privileged user, no errors are thrown any more:
CREATE LOGIN testlogin WITH PASSWORD = 'testlogin123';
GO
USE AdventureWorks2012;
GO
CREATE USER testlogin FOR LOGIN testlogin;
GO
GRANT SELECT TO testlogin;
GO
EXECUTE AS USER = 'testlogin';
GO
SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > '20080101';
GO
REVERT;
GO
If we open a profiler trace and capture the “Plan Guide Successful” and “Plan Guide Unsuccessful” events, we can see that the optimizer matches the plan guide and enforces the use of the query hint.
Lesson learned: even if users are not allowed to issue that particular query hint directly, adding it to a plan guide is a way to let anyone use it indirectly.
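If that is not the desired behaviour, the plan guide can be disabled or dropped at any time with sp_control_plan_guide:

EXEC sp_control_plan_guide N'DISABLE', N'[querytraceon]';
-- or remove it altogether:
EXEC sp_control_plan_guide N'DROP', N'[querytraceon]';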
Bottom line is that QUERYTRACEON can indeed be very useful when we identify some queries that obtain a decent query plan only when a specific trace flag is active and we don’t want to enable it for the whole instance. In those cases, a plan guide or a stored procedure in the owner’s context can be the answer.
Manual Log Shipping with PowerShell
Recently I had to implement log shipping as an HA strategy for a set of databases that were originally running under the simple recovery model.
Actually, the databases were subscribers for a merge publication, which leaves database mirroring out of the possible HA options. Clustering was not an option either, due to lack of shared storage at the subscribers.
After turning all databases to full recovery model and setting up log shipping, I started to wonder if there was a better way to implement it. Log shipping provides lots of flexibility, which I didn’t need: I just had to ship the transaction log from the primary to a single secondary and have transaction logs restored immediately. Preserving transaction log backups was not needed, because the secondary database was considered a sufficient backup in this case.
Another thing I observed was the insane amount of memory consumed by SQLLogShip.exe (over 300 MB), which at times even ended up failing with an OutOfMemoryException.
After reading Edwin Sarmiento’s fine chapter “The poor man’s SQL Server log shipping” in SQL Server MVP Deep Dives, some ideas started to flow.
First of all I needed a table to hold the configuration for my manual log shipping:
-- =============================================
-- Author:      Gianluca Sartori - @spaghettidba
-- Create date: 2013-02-07
-- Description: Creates a table to hold manual
--              log shipping configuration
-- =============================================
CREATE TABLE msdb.dbo.ManualLogShippingConfig (
    secondary sysname PRIMARY KEY CLUSTERED, -- Name of the secondary SQL Server instance
    sharedBackupFolder varchar(255),         -- UNC path to the backup path on the secondary
    remoteBackupFolder varchar(255)          -- Path to the backup folder on the secondary
                                             -- It's the same path as sharedBackupFolder,
                                             -- as seen from the secondary server
)
GO

INSERT INTO msdb.dbo.ManualLogShippingConfig (
    secondary,
    sharedBackupFolder,
    remoteBackupFolder
)
VALUES (
    'SomeServer',
    '\\SomeShare',
    'Local path of SomeShare on secondary'
)
GO
And then I just needed a PowerShell script to do the heavy lifting.
I think the code is commented and readable enough to show what happens behind the scenes.
## =============================================
## Author:      Gianluca Sartori - @spaghettidba
## Create date: 2013-02-07
## Description: Ships the log to a secondary server
## =============================================

sl c:\

$ErrorActionPreference = "Stop"

$primary = "$(ESCAPE_DQUOTE(SRVR))"

#
# Read configuration from the table in msdb
#
$SQL_Config = @"
SELECT *
FROM msdb.dbo.ManualLogShippingConfig
"@

$info = Invoke-Sqlcmd -Query $SQL_Config -ServerInstance $primary

$secondary          = $info.secondary
$sharedFolder       = $info.sharedBackupFolder
$remoteSharedFolder = $info.remoteBackupFolder

$ts = Get-Date -Format yyyyMMddHHmmss

#
# Read default backup path of the primary from the registry
#
$SQL_BackupDirectory = @"
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory'
"@

$info = Invoke-Sqlcmd -Query $SQL_BackupDirectory -ServerInstance $primary

$BackupDirectory = $info.Data

#
# Ship the log of all databases in FULL recovery model
# You can change this to ship a single database's log
#
$SQL_FullRecoveryDatabases = @"
SELECT name
FROM master.sys.databases
WHERE recovery_model_desc = 'FULL'
    AND name NOT IN ('master', 'model', 'msdb', 'tempdb')
"@

$info = Invoke-Sqlcmd -Query $SQL_FullRecoveryDatabases -ServerInstance $primary

$info | ForEach-Object {

    $DatabaseName = $_.Name

    Write-Output "Processing database $DatabaseName"

    $BackupFile = $DatabaseName + "_" + $ts + ".trn"
    $BackupPath = Join-Path $BackupDirectory $BackupFile
    $RemoteBackupPath = Join-Path $remoteSharedFolder $BackupFile

    $SQL_BackupDatabase = "BACKUP LOG $DatabaseName TO DISK='$BackupPath' WITH INIT;"

    $SQL_NonCopiedBackups = "
        SELECT physical_device_name
        FROM msdb.dbo.backupset AS BS
        INNER JOIN msdb.dbo.backupmediaset AS BMS
            ON BS.media_set_id = BMS.media_set_id
        INNER JOIN msdb.dbo.backupmediafamily AS BMF
            ON BMS.media_set_id = BMF.media_set_id
        WHERE BS.database_name = '$DatabaseName'
            AND BS.type = 'L'
            AND expiration_date IS NULL
        ORDER BY BS.backup_start_date
    "

    #
    # Backup log to local path
    #
    Invoke-Sqlcmd -Query $SQL_BackupDatabase -ServerInstance $primary -QueryTimeout 65535

    Write-Output "LOG backed up to $BackupPath"

    #
    # Query noncopied backups
    #
    $nonCopiedBackups = Invoke-Sqlcmd -Query $SQL_NonCopiedBackups -ServerInstance $primary

    $nonCopiedBackups | ForEach-Object {

        $BackupPath = $_.physical_device_name
        $BackupFile = Split-Path $BackupPath -Leaf
        $RemoteBackupPath = Join-Path $remoteSharedFolder $BackupFile

        $SQL_RestoreDatabase = "
            RESTORE LOG $DatabaseName
            FROM DISK='$RemoteBackupPath'
            WITH NORECOVERY;
        "

        $SQL_ExpireBackupSet = "
            UPDATE BS
            SET expiration_date = GETDATE()
            FROM msdb.dbo.backupset AS BS
            INNER JOIN msdb.dbo.backupmediaset AS BMS
                ON BS.media_set_id = BMS.media_set_id
            INNER JOIN msdb.dbo.backupmediafamily AS BMF
                ON BMS.media_set_id = BMF.media_set_id
            WHERE BS.database_name = '$DatabaseName'
                AND BS.type = 'L'
                AND physical_device_name = '$BackupPath'
        "

        #
        # Move the transaction log backup to the secondary
        #
        if (Test-Path $BackupPath) {
            Write-Output "Moving $BackupPath to $sharedFolder"
            Move-Item -Path ("Microsoft.PowerShell.Core\FileSystem::" + $BackupPath) `
                      -Destination ("Microsoft.PowerShell.Core\FileSystem::" + $sharedFolder) `
                      -Force
        }

        #
        # Restore the backup on the secondary
        #
        Invoke-Sqlcmd -Query $SQL_RestoreDatabase -ServerInstance $secondary -QueryTimeout 65535

        Write-Output "Restored LOG from $RemoteBackupPath"

        #
        # Delete the backup file
        #
        Write-Output "Deleting $RemoteBackupPath"
        Remove-Item $RemoteBackupPath -ErrorAction SilentlyContinue

        #
        # Mark the backup as expired
        #
        Write-Output "Expiring backup set $BackupPath"
        Invoke-Sqlcmd -Query $SQL_ExpireBackupSet -ServerInstance $primary
    }
}
The script can be used in a SQLAgent PowerShell job step and it’s all you need to start shipping your transaction logs.
Obviously, you need to take a full backup on the primary server and restore it to the secondary WITH NORECOVERY.
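Something like this, assuming the same share used by the script (all names are placeholders):

-- On the primary
BACKUP DATABASE SomeDB
TO DISK = '\\SomeShare\SomeDB_full.bak'
WITH INIT;

-- On the secondary
RESTORE DATABASE SomeDB
FROM DISK = '\\SomeShare\SomeDB_full.bak'
WITH NORECOVERY;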
Once you’re ready, you can schedule the job to ship the transaction logs.
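If you want to script the job itself, this is a rough sketch of the msdb calls involved; the job name and schedule are made up, and the PowerShell script above goes in the step command:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Manual Log Shipping';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Manual Log Shipping',
    @step_name = N'Ship transaction logs',
    @subsystem = N'PowerShell',
    @command   = N'... paste the PowerShell script here ...';

EXEC dbo.sp_add_jobschedule
    @job_name             = N'Manual Log Shipping',
    @name                 = N'Every 15 minutes',
    @freq_type            = 4,  -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,  -- units of minutes
    @freq_subday_interval = 15;

EXEC dbo.sp_add_jobserver @job_name = N'Manual Log Shipping';
GO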