Category Archives: T-SQL
As a DBA, you probably have your toolbox full of scripts, procedures and functions that you use for the day-to-day administration of your instances.
I’m no exception and my hard drive is full of scripts that I tend to accumulate and never throw away, even if I know I will never need (or find?) them again.
However, my preferred way to organize and maintain my administration scripts is a database called “TOOLS”, which contains all the scripts I regularly use.
One of the challenges involved in keeping the scripts in a database rather than in a script file is the inability to choose a database context for the execution. When a statement is encapsulated in a view, function or stored procedure in a database, every reference to a database-specific object is limited to the database that contains the programmable object itself. The most obvious way to overcome this limitation is dynamic SQL, but, as we will see, it is not the only one.
For instance, if I want to query the name of the tables in a database, I can use the following statement:
SELECT name FROM sys.tables
The statement references a catalog view specific to a single database, so if I enclose it in a stored procedure, the table names returned by this query are those found in the database that contains the stored procedure itself:
USE TOOLS;
GO

CREATE PROCEDURE getTableNames
AS
SELECT name FROM sys.tables;
GO

EXEC getTableNames;
This is usually not an issue, since most stored procedures will not cross the boundaries of the database they are created in. Administration scripts are different, because they are meant as a single entry point to maintain the whole SQL server instance.
In order to let the statement work against a different database, you need to choose one of the following solutions:
- dynamic SQL
- marking as a system object
- … an alternative way
Each of these techniques has its PROs and its CONs and I will try to describe them in this post.
1. Dynamic SQL
It’s probably the easiest way to solve the issue: you just have to concatenate the database name to the object names.
USE TOOLS;
GO

ALTER PROCEDURE getTableNames
    @db_name sysname
AS
BEGIN
    DECLARE @sql nvarchar(max)
    SET @sql = 'SELECT name FROM ' + QUOTENAME(@db_name) + '.sys.tables';
    EXEC(@sql)
END
GO

EXEC getTableNames 'msdb';
- PRO: very easy to implement for simple statements
- CON: can rapidly turn into a nightmare with big, complicated statements, as each object must be concatenated with the database name. Different objects relate to the database in different ways: tables and views can be prefixed directly, while functions such as OBJECT_NAME accept an additional database_id parameter instead (see the sketch after this list)
- CON: the statement has to be treated as a string and enclosed in quotes, which means that:
- quotes must be escaped, and escaped quotes must be escaped again and escaped and re-escaped quotes… ok, you know what I mean
- no development aids such as intellisense, just-in-time syntax checks and syntax coloring
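To make the second point concrete, here is a minimal sketch (the variable names are mine) showing how tables and metadata functions need different treatment; note that OBJECT_NAME takes a database_id rather than a name:

DECLARE @db_name sysname = N'msdb';
DECLARE @sql nvarchar(max);

-- Tables and views: the database name is concatenated directly
SET @sql = N'SELECT name FROM ' + QUOTENAME(@db_name) + N'.sys.tables;';
EXEC (@sql);

-- Metadata functions: OBJECT_NAME takes a database id, not a name
SET @sql = N'SELECT OBJECT_NAME(object_id, ' + CAST(DB_ID(@db_name) AS nvarchar(10))
         + N') AS table_name FROM ' + QUOTENAME(@db_name) + N'.sys.tables;';
EXEC (@sql);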
2. Dynamic SQL with sp_executesql
It’s a neater way to avoid concatenating the database name to each object referenced in the statement.
USE TOOLS;
GO

ALTER PROCEDURE getTableNames
    @db_name sysname
AS
BEGIN
    -- use a @sql variable to store the whole query
    -- without concatenating the database name
    DECLARE @sql nvarchar(max);
    SET @sql = 'SELECT name FROM sys.tables';

    -- concatenate the database name to the
    -- sp_executesql call, just once
    DECLARE @cmd nvarchar(max);
    SET @cmd = 'EXEC ' + QUOTENAME(@db_name) + '.sys.sp_executesql @sql';

    EXEC sp_executesql @cmd, N'@sql nvarchar(max)', @sql
END
GO

EXEC getTableNames 'msdb';
- PRO: the dynamic SQL is taken as a whole and does not need to be cluttered with multiple concatenations
- CON: needs some more work than a straight concatenation and can be seen as “obscure”
- CON: suffers from the same issues found with plain dynamic SQL, because the statement is, again, treated as a string
3. System object
Nice and easy: every stored procedure you create in the master database with the “sp_” prefix can be executed from any database context.
Using the undocumented stored procedure sp_MS_marksystemobject you can also mark the stored procedure as a “system object” and let it reference the tables in the database from which it is invoked.
USE master;
GO

CREATE PROCEDURE sp_getTableNames
AS
BEGIN
    SELECT name FROM sys.tables
END
GO

EXEC sys.sp_MS_marksystemobject 'sp_getTableNames'
GO

USE msdb;
GO

EXEC sp_getTableNames;
- PRO: no need to use dynamic SQL
- CON: requires creating objects in the “master” database, which is something I tend to avoid
- CON: works with stored procedures only (actually, it works with other objects, such as tables and views, but you have to use the “sp_” prefix. The day I find a view named “sp_getTableNames” in the master database, it won’t be safe to stay near me)
4. An alternative method
It would be really helpful if we could store the statement we want to execute inside an object that doesn’t involve dynamic sql and doesn’t need to be stored in the master database. In other words, we need a way to get the best of both worlds.
Is there such a solution? Apparently, there isn’t.
The ideal object to store a statement and reuse it later is a view, but there is no way to “execute” a view against a different database. In fact you don’t execute a view, you just select from it, which is quite a different thing.
What you “execute” when you select from a view is the statement in its definition (not really, but let me simplify).
So, what we would need to do is just read the definition from a view and use the statement against the target database. Sounds straightforward, but it’s not.
The definition of a view also contains the “CREATE VIEW” statement, and stripping it off is not as easy as it seems.
Let’s see the issue with an example: I will create a view to query the last update date of the index statistics in a database, using the query from Glenn Berry’s Diagnostic Queries.
USE TOOLS;
GO

-- When were Statistics last updated on all indexes? (Query 48)
CREATE VIEW statisticsLastUpdate
AS
SELECT DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON o.[object_id] = i.[object_id]
INNER JOIN sys.stats AS s WITH (NOLOCK)
    ON i.[object_id] = s.[object_id]
    AND i.index_id = s.stats_id
INNER JOIN sys.dm_db_partition_stats AS st WITH (NOLOCK)
    ON o.[object_id] = st.[object_id]
    AND i.[index_id] = st.[index_id]
WHERE o.[type] = 'U';
I just had to remove ORDER BY and OPTION(RECOMPILE) because query hints cannot be used in views.
Querying the object definition returns the whole definition of the view, not only the SELECT statement:
SELECT OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate')) AS definition
definition
-------------------------------------------------------------------
-- When were Statistics last updated on all indexes? (Query 48)
CREATE VIEW statisticsLastUpdate
AS
SELECT DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute

(1 row(s) affected)
In order to extract the SELECT statement, we would need something able to parse (properly!) the view definition and we all know how complex it can be.
Fortunately, SQL Server ships with an undocumented function used in replication that can help solve the problem: its name is fn_replgetparsedddlcmd.
This function accepts some parameters, lightly documented in the code:

fn_replgetparsedddlcmd (@ddlcmd, @FirstToken, @objectType, @dbname, @owner, @objname, @targetobject)
Going back to the example, we can use this function to extract the SELECT statement from the view definition:
SELECT master.sys.fn_replgetparsedddlcmd(
        OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate'))
        ,'CREATE'
        ,'VIEW'
        ,DB_NAME()
        ,'dbo'
        ,'statisticsLastUpdate'
        ,NULL
    ) AS statement
statement
---------------------------------------------------------------------
AS
SELECT DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count

(1 row(s) affected)
The text returned by the function still contains the “AS” keyword, but removing it is a no-brainer:
DECLARE @cmd nvarchar(max)

SELECT @cmd = master.sys.fn_replgetparsedddlcmd(
        OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate'))
        ,'CREATE'
        ,'VIEW'
        ,DB_NAME()
        ,'dbo'
        ,'statisticsLastUpdate'
        ,NULL
    )

SELECT @cmd = RIGHT(@cmd, LEN(@cmd) - 2) -- Removes "AS"

SELECT @cmd AS statement
statement
-------------------------------------------------------------------
SELECT DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count

(1 row(s) affected)
Now that we are able to read the SELECT statement from a view’s definition, we can execute that statement against any database we like, or even against all the databases in the instance.
-- =============================================
-- Author:      Gianluca Sartori - spaghettidba
-- Create date: 2013-04-16
-- Description: Extracts the view definition
--              and runs the statement in the
--              database specified by @db_name
--              If the target database is a pattern,
--              the statement gets executed against
--              all databases matching the pattern.
-- =============================================
CREATE PROCEDURE [dba_execute_view]
     @view_name sysname
    ,@db_name sysname
AS
BEGIN

    SET NOCOUNT, XACT_ABORT, QUOTED_IDENTIFIER, ANSI_NULLS,
        ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
        CONCAT_NULL_YIELDS_NULL ON;
    SET NUMERIC_ROUNDABORT OFF;

    DECLARE @cmd nvarchar(max)
    DECLARE @sql nvarchar(max)
    DECLARE @vw_schema sysname
    DECLARE @vw_name sysname

    IF OBJECT_ID(@view_name) IS NULL
    BEGIN
        RAISERROR('No suitable object found for name %s',16,1,@view_name)
        RETURN
    END

    IF DB_ID(@db_name) IS NULL
        AND @db_name NOT IN ('[USER]','[SYSTEM]')
        AND @db_name IS NOT NULL
    BEGIN
        RAISERROR('No suitable database found for name %s',16,1,@db_name)
        RETURN
    END

    SELECT @vw_schema = OBJECT_SCHEMA_NAME(OBJECT_ID(@view_name)),
           @vw_name   = OBJECT_NAME(OBJECT_ID(@view_name))

    SELECT @cmd = master.sys.fn_replgetparsedddlcmd(
            OBJECT_DEFINITION(OBJECT_ID(@view_name))
            ,'CREATE'
            ,'VIEW'
            ,DB_NAME()
            ,@vw_schema
            ,@vw_name
            ,NULL
        )

    SELECT @cmd = RIGHT(@cmd, LEN(@cmd) - 2) -- Removes "AS"

    -- CREATE A TARGET TEMP TABLE
    SET @sql = N'
        SELECT TOP(0) * INTO #results FROM ' + @view_name + ';

        INSERT #results
        EXEC [dba_ForEachDB]
            @statement = @cmd,
            @name_pattern = @db_name;

        SELECT * FROM #results;'

    EXEC sp_executesql
         @sql
        ,N'@cmd nvarchar(max), @db_name sysname'
        ,@cmd
        ,@db_name

END
The procedure depends on dba_ForEachDB, the stored procedure I posted a couple of years ago that replaces the one shipped by Microsoft. If you still prefer their version, you’re free to modify the code as you wish.
Now that we have a stored procedure that “executes” a view, we can use it to query statistics update information from a different database:
EXEC [dba_execute_view] 'statisticsLastUpdate', 'msdb'
We could also query the same information from all user databases:
EXEC [dba_execute_view] 'statisticsLastUpdate', '[USER]'
That’s it, very easy and straightforward.
Just one suggestion for the SELECT statements in the views: add a DB_NAME() column, in order to understand where the data comes from, or it’s going to be a total mess.
This is just the basic idea, the code can be improved in many ways.
For instance, we could add a parameter to decide whether the results must be piped to a temporary table or not. As you probably know, INSERT…EXEC cannot be nested, so you might want to pipe the results to a table in a different way.
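As a rough sketch (the @pipe_results parameter is hypothetical and not part of the procedure above), the final part of dba_execute_view could branch like this:

-- Hypothetical @pipe_results parameter: when 0, skip the temp table
-- and the nested INSERT...EXEC, letting each database return its own
-- result set directly
IF @pipe_results = 1
    EXEC sp_executesql @sql,
         N'@cmd nvarchar(max), @db_name sysname',
         @cmd, @db_name;
ELSE
    EXEC [dba_ForEachDB]
        @statement = @cmd,
        @name_pattern = @db_name;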
Another thing you might want to add is the ability to order the results according to an additional parameter.
To sum it up, with a little help from Microsoft, we can now safely create a database packed with all our administration stuff and execute the queries against any database in our instance.
Recently I had to assess and tune quite a lot of SQL Server instances, and one of the things that are often overlooked is the location of the system databases.
I often see instances where the system databases are located in the system drives under the SQL Server default installation path, which is bad for many reasons, especially for tempdb.
I had to move the system databases so many times that I ended up coding a script to automate the process.
The script finds all system databases that are not sitting in the default data and log paths and issues the ALTER DATABASE statements needed to move the files to the default paths.
Obviously, to let the script work, the default data and log paths must have been set in the instance properties:
You may also point out that moving all system databases to the default data and log paths is not always a good idea. And you would be right: for instance, if possible, the tempdb database should be working on a fast dedicated disk. However, very often I find myself dealing with low-end servers where separate data and log disks are a luxury, not to mention a dedicated tempdb disk. If you are concerned about moving tempdb to the default data and log paths, you can modify the script accordingly.
-- =============================================
-- Author:      Gianluca Sartori - spaghettidba
-- Create date: 2013-03-22
-- Description: Moves the system databases to the
--              default data and log paths and
--              updates SQL Server startup params
--              accordingly.
-- =============================================

SET NOCOUNT ON;

USE master;

-- Find default data and log paths
-- reading from the registry

DECLARE @defaultDataLocation nvarchar(4000)
DECLARE @defaultLogLocation nvarchar(4000)

EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData',
    @defaultDataLocation OUTPUT

EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultLog',
    @defaultLogLocation OUTPUT

-- Loop through all system databases
-- and move to the default data and log paths

DECLARE @sql nvarchar(max)

DECLARE stmts CURSOR STATIC LOCAL FORWARD_ONLY
FOR
SELECT
    ' ALTER DATABASE ' + DB_NAME(database_id) +
    ' MODIFY FILE ( ' +
    '     NAME = ''' + name + ''', ' +
    '     FILENAME = ''' +
        CASE type_desc
            WHEN 'ROWS' THEN @defaultDataLocation
            ELSE @defaultLogLocation
        END +
        '\' + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name), 1) - 1) + '''' +
    ' )'
FROM sys.master_files
WHERE DB_NAME(database_id) IN ('master','model','msdb','tempdb')
    AND (
        physical_name NOT LIKE @defaultDataLocation + '%'
        OR physical_name NOT LIKE @defaultLogLocation + '%'
    )

OPEN stmts
FETCH NEXT FROM stmts INTO @sql

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @sql
    EXEC(@sql)
    FETCH NEXT FROM stmts INTO @sql
END

CLOSE stmts
DEALLOCATE stmts

-- Update SQL Server startup parameters
-- to reflect the new master data and log
-- files locations

DECLARE @val nvarchar(500)
DECLARE @key nvarchar(100)

DECLARE @regvalues TABLE (
    parameter nvarchar(100),
    value nvarchar(500)
)

INSERT @regvalues
EXEC master.dbo.xp_instance_regenumvalues
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters'

DECLARE reg CURSOR STATIC LOCAL FORWARD_ONLY
FOR
SELECT *
FROM @regvalues
WHERE value LIKE '-d%'
    OR value LIKE '-l%'

OPEN reg
FETCH NEXT FROM reg INTO @key, @val

WHILE @@FETCH_STATUS = 0
BEGIN

    IF @val LIKE '-d%'
        SET @val = '-d' + (
            SELECT physical_name
            FROM sys.master_files
            WHERE DB_NAME(database_id) = 'master'
                AND type_desc = 'ROWS'
        )

    IF @val LIKE '-l%'
        SET @val = '-l' + (
            SELECT physical_name
            FROM sys.master_files
            WHERE DB_NAME(database_id) = 'master'
                AND type_desc = 'LOG'
        )

    EXEC master.dbo.xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters',
        @key,
        N'REG_SZ',
        @val

    FETCH NEXT FROM reg INTO @key, @val

END

CLOSE reg
DEALLOCATE reg
After running this script, you can shut down the SQL Server service and move the data and log files to the appropriate locations.
When the files are ready, you can bring SQL Server back online.
BE CAREFUL! Before running this script against a clustered instance, check what the xp_instance_regread commands return: I have seen cases with SQL Server not reading from the appropriate keys.
If you are one among the many that downloaded my consistency check stored procedure called “dba_RunCHECKDB”, you may have noticed a “small” glitch… it doesn’t work on SQL Server 2012!
This is due to the resultset definition of DBCC CHECKDB, which has changed again in SQL Server 2012. Trying to pipe the results of that command into the table definition for SQL Server 2008 produces a column mismatch and it obviously fails.
Fixing the code is very easy indeed, but I could never find the time to post the corrected version until today.
Also, I had to discover the new table definition for DBCC CHECKDB, and it was not as easy as it used to be in SQL Server 2008. In fact, a couple of days ago I posted a way to discover the new resultset definition, working around the cumbersome metadata discovery feature introduced in SQL Server 2012.
Basically, the output of DBCC CHECKDB now includes 6 new columns:
CREATE TABLE ##DBCC_OUTPUT (
    Error int NULL,
    [Level] int NULL,
    State int NULL,
    MessageText nvarchar(2048) NULL,
    RepairLevel nvarchar(22) NULL,
    Status int NULL,
    DbId int NULL,           -- was smallint in SQL2005
    DbFragId int NULL,       -- new in SQL2012
    ObjectId int NULL,
    IndexId int NULL,
    PartitionId bigint NULL,
    AllocUnitId bigint NULL,
    RidDbId smallint NULL,   -- new in SQL2012
    RidPruId smallint NULL,  -- new in SQL2012
    [File] smallint NULL,
    Page int NULL,
    Slot int NULL,
    RefDbId smallint NULL,   -- new in SQL2012
    RefPruId smallint NULL,  -- new in SQL2012
    RefFile smallint NULL,   -- new in SQL2012
    RefPage int NULL,
    RefSlot int NULL,
    Allocation smallint NULL
)
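With this definition in place, the output can be piped into the table as usual; a minimal sketch (the database name is just an example):

INSERT INTO ##DBCC_OUTPUT
EXEC ('DBCC CHECKDB(''msdb'') WITH TABLERESULTS, NO_INFOMSGS');

SELECT * FROM ##DBCC_OUTPUT;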
If you Google the name of one of these new columns, you will probably find a lot of blog posts (no official documentation, unfortunately) that describe the new output of DBCC CHECKDB, but none of them is strictly correct: all of them indicate the smallint columns as int.
Not a big deal, actually, but still incorrect.
As usual, suggestions and comments are welcome.
The QUERYTRACEON query hint has been around for a long time, but it was only recently documented by Microsoft. This means that now it’s officially supported and you can use it in production code.
After reading the post on the CSS blog, I started to wonder whether there is some actual use in production for this query hint, given that it requires the same privileges as DBCC TRACEON, which means you have to be a member of the sysadmin role.
In fact, if you try to use that hint when connected as a low privileged user, you get a very precise error message that leaves no room for interpretation:
SELECT * FROM [AdventureWorks2012].[Person].[Person] OPTION (QUERYTRACEON 4199)
Msg 2571, Level 14, State 3, Line 1
User 'guest' does not have permission to run DBCC TRACEON.
How can a query hint available to sysadmins only possibly be useful for production?
My concerns were not about the usefulness of the hint per se, but about its usefulness in production code. Often 140 characters are not enough to express your thoughts clearly, so I decided to write this blog post to clarify what I mean.
As we have seen, the QUERYTRACEON query hint cannot be used directly by users not in the sysadmin role, but it can be used in stored procedures with “EXECUTE AS owner” and in plan guides.
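The stored procedure variant is straightforward; a minimal sketch (the procedure name is mine), to be created in AdventureWorks2012 by a sysadmin owner:

CREATE PROCEDURE getPersons
WITH EXECUTE AS OWNER
AS
SELECT *
FROM [Person].[Person]
OPTION (QUERYTRACEON 4199);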
While it’s completely clear what happens when the hint is used in procedures executed in the context of the owner, what happens in plan guides is not so obvious (at least, not to me). In fact, given that the security context is not changed when the plan guide is matched and applied, I would have expected it to fail miserably when executed by a low privileged user, but that’s not the case.
Let’s try and see what happens:
First of all we need a query “complex enough” to let the optimizer take plan guides into account. A straight “SELECT * FROM table” and anything else that results in a trivial plan won’t be enough.
SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > '20080101'
Then we need a plan guide to apply the QUERYTRACEON hint:
EXEC sp_create_plan_guide
    @name = N'[querytraceon]',
    @stmt = N'SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > ''20080101''',
    @type = N'SQL',
    @hints = N'OPTION (QUERYTRACEON 4199)'
If we enable the plan guide and try to issue this query in the context of a low privileged user, we can see that no errors are thrown any more:
CREATE LOGIN testlogin WITH PASSWORD = 'testlogin123';
GO

USE AdventureWorks2012;
GO

CREATE USER testlogin FOR LOGIN testlogin;
GO

GRANT SELECT TO testlogin;
GO

EXECUTE AS USER = 'testlogin';
GO

SELECT *
FROM [Person].[Person] AS P
INNER JOIN [Person].[PersonPhone] AS H
    ON P.BusinessEntityID = H.BusinessEntityID
INNER JOIN [Person].[BusinessEntity] AS BE
    ON P.BusinessEntityID = BE.BusinessEntityID
INNER JOIN [Person].[BusinessEntityAddress] AS BEA
    ON BE.BusinessEntityID = BEA.BusinessEntityID
WHERE BEA.ModifiedDate > '20080101';
GO

REVERT;
GO
If we open a profiler trace and capture the “Plan Guide Successful” and “Plan Guide Unsuccessful” events, we can see that the optimizer matches the plan guide and enforces the use of the query hint.
Lesson learned: even if users are not allowed to issue that particular query hint directly, adding it to a plan guide is a way to let anyone use it indirectly.
Bottom line is: OPTION (QUERYTRACEON) can indeed be very useful when we identify some queries that obtain a decent query plan only when a specific trace flag is active and we don’t want to enable it for the whole instance. In those cases, a plan guide or a stored procedure in the owner’s context can be the answer.
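For comparison, this is the instance-wide alternative that the hint and the plan guide let you avoid:

-- enables trace flag 4199 for the whole instance, affecting every query
DBCC TRACEON (4199, -1);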
A couple of weeks ago I posted a method to convert trace files from the SQL Server 2012 format to the SQL Server 2008 format.
The trick works quite well and the trace file can be opened with Profiler or with ReadTrace from RML Utilities. What doesn’t seem to work just as well is the trace replay with Ostress (another great tool bundled in the RML Utilities).
For some reason, OStress refuses to replay the whole trace file and starts throwing lots of errors.
Some errors are due to the workload I was replaying (it contains CREATE TABLE statements, which can obviously succeed only the first time they are issued), but some others seem to be due to parsing errors, probably caused by differences in the trace format between version 11 and version 10.
11/20/12 12:30:39.008 [0x00001040] File C:\RML\SQL00063.rml: Parser Error: [Error: 60500][State: 1][Abs Char: 1068][Seq: 0] Syntax error [parse error, expecting `tok_RML_END_RPC'] encountered near
0x0000042C: 6C000D00 0A005700 48004500 52004500  l.....W.H.E.R.E.
0x0000043C: 20005500 6E006900 74005000 72006900  .U.n.i.t.P.r.i.
0x0000044C: 63006500 20002600 6C007400 3B002000  c.e. .&.l.t.;. .
0x0000045C: 24003500 2E003000 30000D00 0A004F00  $.5...0.0.....O.
0x0000046C: 52004400 45005200 20004200 59002000  R.D.E.R. .B.Y. .
0x0000047C: 50007200 6F006400 75006300 74004900  P.r.o.d.u.c.t.I.
0x0000048C: 44002C00 20004C00 69006E00 65005400  D.,. .L.i.n.e.T.
0x0000049C: 6F007400 61006C00 3B000D00 0A003C00  o.t.a.l.;.....<.
0x000004AC: 2F004300 4D004400 3E000D00 0A003C00  /.C.M.D.>.....<.
0x000004BC: 2F004C00 41004E00 47003E00 0D000A00  /.L.A.N.G.>.....
0x000004CC:
11/20/12 12:30:39.010 [0x00001040] File C:\RML\SQL00063.rml: Parser Error: [Error: 110010][State: 100][Abs Char: 1068][Seq: 0] SYNTAX ERROR: Parser is unable to safely recover. Correct the errors and try again.
The error suggests that the two formats are indeed more different than I supposed, thus making the replay with OStress a bit unreliable.
Are there other options?
Sure there are! Profiler is another tool that allows replaying the workload, even if some limitations apply. For instance, Profiler cannot be scripted, which is a huge limitation if you are using OStress in a benchmarking script and want to replace it with something else.
That “something else” could actually be the Distributed Replay feature introduced in SQL Server 2012.
Basically, Distributed Replay does the same things that OStress does and even more, with the nice addition of the ability to start the replay on multiple machines, thus simulating a workload that more closely resembles the one found in production.
An introduction to Distributed Replay can be found on Jonathan Kehayias’ blog and I will refrain from going into deep details here: those posts are outstanding and there’s very little I could add to that.
Installing the Distributed Replay feature
The first step for the installation is adding a new user for the distributed replay services. You could actually use separate accounts for the Controller and Client services, but for a quick demo a single user is enough.
The Distributed Replay Controller and Client features must be selected from the Feature Selection dialog of SQL Server setup:
In the next steps of the setup you will also be asked for the service accounts to use for the services, and on the Replay Client page you will have to enter the controller name and the working directories.
Once the setup is over, you will find two new services in the Server Manager:
After starting the services (first the Controller, then the Client), you can go to the log directories and check in the log files if everything is working.
The two files to check are in the following folders:
- C:\Program Files (x86)\Microsoft SQL Server\110\Tools\DReplayController\Log
- C:\Program Files (x86)\Microsoft SQL Server\110\Tools\DReplayClient\Log
Just to prove one more time that “if something can go wrong, it will”, the client log will probably contain an obnoxious error message.
Setting up the distributed replay services can get tricky because of some permissions needed to let the client connect to the controller. Unsurprisingly, the client/controller communication is provided by DCOM, which must be configured correctly.
Without granting the appropriate permissions, in the distributed replay client log file you may find the following message:
2012-11-03 00:43:04:062 CRITICAL [Client Service] [0xC8100005 (6)] Failed to connect controller with error code 0x80070005.
In practical terms, the service account that executes the distributed replay controller service must be granted permissions to use the DCOM class locally and through the network:
- Run dcomcnfg.exe
- Navigate the tree to Console Root, Component Services, Computers, My Computer, DCOM Config, DReplayController
- Right click DReplayController and choose “properties” from the context menu.
- Click the Security tab
- Click the “Launch and Activation Permissions” edit button and grant “Local Activation” and “Remote Activation” permissions to the service account
- Click the “Access Permissions” edit button and grant “Local Access” and “Remote Access” permissions to the service account
- Add the service user account to the “Distributed COM Users” group
- Restart the distributed replay controller and client services
After restarting the services, you will find that the message in the log file has changed:
2012-11-20 14:01:10:783 OPERATIONAL [Client Service] Registered with controller "WIN2012_SQL2012".
Using the Replay feature
Once the services are successfully started, we can now start using the Distributed Replay feature.
The trace file has to meet the same requirements for replay found in Profiler, thus making the “Replay” trace template suitable for the job.
But there’s one more step needed before we can replay the trace file, which cannot be replayed directly. In fact, Distributed Replay needs to work on a trace stub, obtained by preprocessing the original trace file.
The syntax to obtain the stub is the following:
"C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\dreplay.exe" preprocess -i "C:\SomePath\replay_trace.trc" -d "C:\SomePath\preprocessDir"
Now that the trace stub is ready, we can start the replay admin tool from the command line, using the following syntax:
"C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\dreplay.exe" replay -s "targetServerName" -d "C:\SomePath\preprocessDir" -w "list,of,allowed,client,names"
A final word
Comparing the features of the different replay tools, the Distributed Replay Controller can act as a replacement for OStress, except for the ability to replay SQL and RML files.
Will we be using RML Utilities again in the future? Maybe: it depends on what Microsoft decides to do with this tool. It’s not unlikely that the Distributed Replay feature will replace the RML Utilities entirely. Tracing itself has an uncertain future ahead, having been deprecated in SQL Server 2012: it will probably disappear in a future version of SQL Server, or be ported to the Extended Events infrastructure, who knows?
One thing is sure: today we have three tools that support replaying trace files, and seeing this possibility disappear in the future would be very disappointing. I’m sure SQL Server will never disappoint us.
Recently I started using RML utilities quite a lot.
ReadTrace and OStress are awesome tools for benchmarking and baselining, and many of the features found there have not been fully implemented in SQL Server 2012, though Distributed Replay was a nice addition.
However, as you may have noticed, ReadTrace is just unable to read trace files from SQL Server 2012, so you may get stuck with a trace file you won’t be able to process.
When I first hit this issue, I immediately thought I could use a trace table to store the data and then use Profiler again to write back to a trace file.
The idea wasn’t bad, but it turns out that Profiler 2012 will always write trace files in the new format, with no way to specify the old one. On the other hand, Profiler 2008 R2 can’t read trace data from a table written by Profiler 2012, throwing an ugly exception:
Interesting! So, looks like Profiler stores version information and other metadata somewhere in the trace table, but where exactly?
It might sound funny, but I had to trace Profiler with Profiler in order to know! Looking at the profiler trace, the first thing that Profiler does when trying to open a trace table is this:
declare @p1 int
set @p1=180150003
declare @p3 int
set @p3=2
declare @p4 int
set @p4=4
declare @p5 int
set @p5=-1
exec sp_cursoropen @p1 output,N'select BinaryData from [dbo].[trace_test] where RowNumber=0',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
So, looks like Profiler stores its metadata in the first row (RowNumber = 0), in binary format.
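You can verify this with a direct query on the trace table (the table name comes from the example above):

SELECT RowNumber, BinaryData
FROM [dbo].[trace_test]
WHERE RowNumber = 0;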
That was the clue I was looking for!
I loaded a trace file in the old format into another trace table and I started to compare the data to find similarities and differences.
I decided to break the binary headers into Dwords and paste the results in WinMerge to hunt the differences:
-- Break the header row in the trace table into DWords
-- in order to compare easily in WinMerge
SELECT SUBSTRING(data, 8 * (n - 1) + 3, 8) AS dword
    ,n AS dwordnum
FROM (
    SELECT CONVERT(varchar(max), CAST(binarydata AS varbinary(max)), 1) AS data
    FROM tracetable
    WHERE rownumber = 0
) AS src
INNER JOIN (
    SELECT DISTINCT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) / 8 AS n
    FROM sys.all_columns AS ac
) AS v
    ON n > 0
    AND (n - 1) * 8 <= LEN(data) - 3
ORDER BY 2
If you copy/paste the output in WinMerge you can easily spot the difference around the 100th dword:
Hmmmm, seems promising. Can those “11” and “10” possibly represent the trace version? Let’s try and see.
Now we should just update that section of the header to let the magic happen:
-- Use a table variable to cast the trace
-- header from image to varbinary(max)
DECLARE @header TABLE (
    header varbinary(max)
)

-- insert the trace header into the table
INSERT INTO @header
SELECT binarydata
FROM tracetable
WHERE RowNumber = 0

-- update the byte at offset 390 with version 10 (SQL Server 2008)
-- instead of version 11 (SQL Server 2012)
UPDATE @header
SET header.WRITE(0x0A, 390, 1)

-- write the header back to the trace table
UPDATE tracetable
SET binarydata = (SELECT header FROM @header)
WHERE RowNumber = 0
The trace table can now be opened with Profiler 2008 R2 and written back to a trace file. Hooray!
Yes, I know, using a trace table can take a whole lot of time and consume a lot of disk space when the file is huge (and typically RML traces are), but this is the best I could come up with.
I tried to look into the trace file itself, but I could not find a way to diff the binary contents in an editor. You may be smarter than me: give it a try and, in that case, please post a comment here.
Using this trick, ReadTrace can happily process the trace file and let you perform your benchmarks, at least until Microsoft decides to update RML Utilities to 2012.
UPDATE 11/08/2012: The use of a trace table is not necessary: the trace file can be updated in place, using the script found here.
It’s been quite a while since I last posted on this blog and I apologize to my readers, both of them.
Today I would like to share with you a handy script I coded recently during a SQL Server health check. One of the tools I find immensely valuable for conducting a SQL Server assessment is Glenn Berry’s SQL Server Diagnostic Information Queries. The script contains several queries that can help you collect and analyze a whole lot of information about a SQL Server instance and I use it quite a lot.
The script comes with a blank results spreadsheet that can be used to save the information gathered by the individual queries. Basically, the spreadsheet is organized in tabs, one for each query, and has no preformatted column names, so that you can run the query, select the whole results grid, copy with headers and paste everything to the appropriate tab.
When working with multiple instances, SSMS can help automate this task with multiserver queries. Depending on your SSMS settings, the results of a multiserver query can be merged into a single grid, with an additional column holding the server name.
This feature is very handy, because it lets you run a statement against multiple servers without changing the statement itself.
This works very well for the queries in the first part of Glenn Berry’s script, which is dedicated to instance-level checks. The second part of the script is database-specific and you have to repeat the run+copy+paste process for each database in your instance.
It would be great if there was a feature in SSMS that allowed you to obtain the same results as the multiserver queries, scaled down to the database level. Unfortunately, SSMS has no such feature and the only possible solution is to code it yourself… or borrow my script!
Before rushing to the code, let’s briefly describe the idea behind it and the challenges involved.
It would be quite easy to take a single statement and use it with sp_MSForEachDB, but this solution has several shortcomings:
- The results would display as individual grids
- There would be no easy way to determine which results grid belongs to which database
- The statement would have to be surrounded with quotes and existing quotes would have to be doubled, with an increased and unwanted complexity
The ideal tool for this task should simply take a statement and run it against all [user] databases without modifying the statement at all, merge the results in a single result set and add an additional column to hold the database name. Apparently, sp_MSForEachDB, besides being undocumented and potentially nasty, is not the right tool for the job.
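For reference, this is roughly what the sp_MSForEachDB route would look like; a sketch that shows both the doubled quotes and the one-grid-per-database output:

EXEC sp_MSforeachdb N'
    USE [?];
    SELECT DB_NAME() AS database_name,
           DATABASEPROPERTYEX(DB_NAME(), ''Recovery'') AS RecoveryModel;
';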
That said, the only option left is to capture the statement from its query window, combining a trace, a loopback linked server and various other tricks.
Here’s the code:
-- =============================================
-- Author:      Gianluca Sartori - @spaghettidba
-- Create date: 2012-06-26
-- Description: Records statements to replay
--              against all databases.
-- =============================================
CREATE PROCEDURE replay_statements_on_each_db
     @action varchar(10) = 'RECORD',
     @start_statement_id int = NULL,
     @end_statement_id int = NULL
AS
BEGIN

    SET NOCOUNT ON;

    DECLARE @TraceFile nvarchar(256);
    DECLARE @TraceFileNoExt nvarchar(256);
    DECLARE @LastPathSeparator int;
    DECLARE @TracePath nvarchar(256);
    DECLARE @TraceID int;
    DECLARE @fs bigint = 5;
    DECLARE @r int;
    DECLARE @spiid int = @@SPID;
    DECLARE @srv nvarchar(4000);
    DECLARE @ErrorMessage nvarchar(4000);
    DECLARE @ErrorSeverity int;
    DECLARE @ErrorState int;
    DECLARE @sql nvarchar(max);
    DECLARE @statement nvarchar(max);
    DECLARE @column_list nvarchar(max);

    IF @action NOT IN ('RECORD','STOPRECORD','SHOWQUERY','REPLAY')
        RAISERROR('A valid @action (RECORD,STOPRECORD,SHOWQUERY,REPLAY) must be specified.',16,1)

    -- *********************************************** --
    -- *                   RECORD                    * --
    -- *********************************************** --
    IF @action = 'RECORD'
    BEGIN

        BEGIN TRY

            -- Identify the path of the default trace
            SELECT @TraceFile = path
            FROM master.sys.traces
            WHERE id = 1

            -- Split the directory / filename parts of the path
            SELECT @LastPathSeparator = MAX(number)
            FROM master.dbo.spt_values
            WHERE type = 'P'
                AND number BETWEEN 1 AND LEN(@TraceFile)
                AND CHARINDEX('\', @TraceFile, number) = number

            SELECT @TraceFile =
                SUBSTRING(@TraceFile, 1, @LastPathSeparator)
                + 'REPLAY_'
                + CONVERT(char(8), GETDATE(), 112)
                + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '')
                + '.trc'

            SET @TraceFileNoExt = REPLACE(@TraceFile, N'.trc', N'')

            -- create trace
            EXEC sp_trace_create @TraceID OUTPUT, 0, @TraceFileNoExt, @fs, NULL;

            -- add filters and events
            EXEC sp_trace_setevent @TraceID, 41, 1, 1;
            EXEC sp_trace_setevent @TraceID, 41, 12, 1;
            EXEC sp_trace_setevent @TraceID, 41, 13, 1;

            EXEC sp_trace_setfilter @TraceID, 1, 0, 7, N'%fn_trace_gettable%'
            EXEC sp_trace_setfilter @TraceID, 1, 0, 7, N'%replay_statements_on_each_db%'
            EXEC sp_trace_setfilter @TraceID, 12, 0, 0, @spiid

            -- start the trace
            EXEC sp_trace_setstatus @TraceID, 1

            -- create a global temporary table to store the statements
            IF OBJECT_ID('tempdb..##replay_info') IS NOT NULL
                DROP TABLE ##replay_info;

            CREATE TABLE ##replay_info (
                trace_id int,
                statement_id int,
                statement_text nvarchar(max)
            );

            -- save the trace id in the global temp table
            INSERT INTO ##replay_info (trace_id) VALUES (@TraceID);

        END TRY
        BEGIN CATCH

            -- cleanup the trace
            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 1)
                EXEC sp_trace_setstatus @TraceID, 0;

            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 0)
                EXEC sp_trace_setstatus @TraceID, 2;

            IF OBJECT_ID('tempdb..##replay_info') IS NOT NULL
                DROP TABLE ##replay_info;

            SELECT @ErrorMessage = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY(),
                   @ErrorState = ERROR_STATE();

            RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);

        END CATCH

    END

    -- *********************************************** --
    -- *               STOP RECORDING                * --
    -- *********************************************** --
    IF @action = 'STOPRECORD'
    BEGIN

        BEGIN TRY

            -- gather the trace id
            SELECT @TraceID = trace_id
            FROM ##replay_info;

            IF @TraceID IS NULL
                RAISERROR('No data has been recorded!',16,1)

            DELETE FROM ##replay_info;

            -- identify the trace file
            SELECT TOP(1) @TraceFile = path
            FROM sys.traces
            WHERE path LIKE '%REPLAY[_]______________.trc'
            ORDER BY id DESC

            -- populate the global temporary table with
            -- the statements recorded in the trace
            INSERT INTO ##replay_info
            SELECT @TraceID,
                   ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
                   TextData
            FROM fn_trace_gettable(@TraceFile, DEFAULT)
            WHERE TextData IS NOT NULL;

            -- stop and delete the trace
            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 1)
                EXEC sp_trace_setstatus @TraceID, 0;

            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 0)
                EXEC sp_trace_setstatus @TraceID, 2;

        END TRY
        BEGIN CATCH

            -- stop and delete the trace
            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 1)
                EXEC sp_trace_setstatus @TraceID, 0;

            IF EXISTS (SELECT 1 FROM sys.traces WHERE id = @TraceID AND status = 0)
                EXEC sp_trace_setstatus @TraceID, 2;

            SELECT @ErrorMessage = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY(),
                   @ErrorState = ERROR_STATE();

            RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);

        END CATCH

    END

    -- *********************************************** --
    -- *           SHOW COLLECTED QUERIES            * --
    -- *********************************************** --
    IF @action = 'SHOWQUERY'
    BEGIN

        BEGIN TRY

            IF OBJECT_ID('tempdb..##replay_info') IS NULL
                RAISERROR('No data has been recorded yet',16,1);

            SET @sql = 'SELECT statement_id, statement_text FROM ##replay_info ';

            IF @start_statement_id IS NOT NULL AND @end_statement_id IS NULL
                SET @sql = @sql + ' WHERE statement_id = @start_statement_id ';

            IF @start_statement_id IS NOT NULL AND @end_statement_id IS NOT NULL
                SET @sql = @sql + ' WHERE statement_id BETWEEN @start_statement_id AND @end_statement_id';

            EXEC sp_executesql @sql
                ,N'@start_statement_id int, @end_statement_id int'
                ,@start_statement_id
                ,@end_statement_id;

        END TRY
        BEGIN CATCH

            SELECT @ErrorMessage = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY(),
                   @ErrorState = ERROR_STATE();

            RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);

        END CATCH

    END

    -- *********************************************** --
    -- *                   REPLAY                    * --
    -- *********************************************** --
    IF @action = 'REPLAY'
    BEGIN

        BEGIN TRY

            -- load the selected statement(s)
            SET @statement = '
                SET @sql = ''''
                SELECT @sql += statement_text + '' ''
                FROM ##replay_info ';

            IF @start_statement_id IS NOT NULL AND @end_statement_id IS NULL
                SET @statement = @statement + ' WHERE statement_id = @start_statement_id ';

            IF @start_statement_id IS NOT NULL AND @end_statement_id IS NOT NULL
                SET @statement = @statement + ' WHERE statement_id BETWEEN @start_statement_id AND @end_statement_id';

            EXEC sp_executesql @statement
                ,N'@start_statement_id int, @end_statement_id int, @sql nvarchar(max) OUTPUT'
                ,@start_statement_id
                ,@end_statement_id
                ,@sql OUTPUT;

            IF NULLIF(LTRIM(@sql),'') IS NULL
                RAISERROR('Unable to locate the statement(s) specified.',16,1)

            SET @srv = @@SERVERNAME; -- gather this server name

            IF EXISTS (SELECT * FROM sys.servers WHERE name = 'TMPLOOPBACK')
                EXEC sp_dropserver 'TMPLOOPBACK';

            -- Create a loopback linked server
            EXEC master.dbo.sp_addlinkedserver
                @server = N'TMPLOOPBACK',
                @srvproduct = N'SQLServ', -- it's not a typo: it can't be "SQLServer"
                @provider = N'SQLNCLI',   -- change to SQLOLEDB for SQL Server 2000
                @datasrc = @srv;

            -- Set the authentication to "current security context"
            EXEC master.dbo.sp_addlinkedsrvlogin
                @rmtsrvname = N'TMPLOOPBACK',
                @useself = N'True',
                @locallogin = NULL,
                @rmtuser = NULL,
                @rmtpassword = NULL;

            -- Use a permanent table in tempdb to store the output
            IF OBJECT_ID('tempdb..___outputTable') IS NOT NULL
                DROP TABLE tempdb..___outputTable;

            -- Execute the statement in tempdb to discover the column definition
            SET @statement = '
                SELECT TOP(0) *
                INTO tempdb..___outputTable
                FROM OPENQUERY(TMPLOOPBACK,''
                    SET FMTONLY OFF;
                    EXEC tempdb.sys.sp_executesql N''''' + REPLACE(@sql,'''','''''''''') + '''''
                '')
            ';

            EXEC(@statement);

            SET @statement = @sql;

            -- Build the column list of the output table
            SET @column_list = STUFF((
                SELECT ',' + QUOTENAME(C.name)
                FROM tempdb.sys.columns AS C
                INNER JOIN tempdb.sys.tables AS T
                    ON C.object_id = T.object_id
                WHERE T.name = '___outputTable'
                FOR XML PATH('')
            ),1,1,SPACE(0));

            -- Add a "Database Name" column
            ALTER TABLE tempdb..___outputTable ADD Database__Name sysname;

            -- Build a sql statement to execute
            -- the recorded statement against all databases
            SET @sql = 'N''INSERT tempdb..___outputTable(' + @column_list + ') EXEC(@statement);
                UPDATE tempdb..___outputTable SET Database__Name = DB_NAME() WHERE Database__Name IS NULL;''';

            -- Build a statement to execute on each database context
            ;WITH dbs AS (
                SELECT *,
                    system_db = CASE WHEN name IN ('master','model','msdb','tempdb') THEN 1 ELSE 0 END
                FROM sys.databases
                WHERE DATABASEPROPERTY(name, 'IsSingleUser') = 0
                    AND HAS_DBACCESS(name) = 1
                    AND state_desc = 'ONLINE'
            )
            SELECT @sql = (
                SELECT 'EXEC ' + QUOTENAME(name) + '.sys.sp_executesql '
                    + @sql + ','
                    + 'N''@statement nvarchar(max)'','
                    + '@statement;' + char(10) AS [text()]
                FROM dbs
                ORDER BY name
                FOR XML PATH('')
            );

            -- Execute multi-db sql and pass in the actual statement
            EXEC sp_executesql @sql, N'@statement nvarchar(max)', @statement

            -- Return the merged results
            SET @sql = '
                SELECT Database__Name AS [Database Name], ' + @column_list + '
                FROM tempdb..___outputTable
                ORDER BY 1;
            '

            EXEC sp_executesql @sql;

            EXEC tempdb.sys.sp_executesql N'DROP TABLE ___outputTable';

        END TRY
        BEGIN CATCH

            SELECT @ErrorMessage = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY(),
                   @ErrorState = ERROR_STATE();

            RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);

        END CATCH

    END

END
As you can see, the code creates a stored procedure that accepts a parameter named @action, which is used to determine what the procedure should do. Specialized sections of the procedure handle every possible value for the parameter, with the following logic:
First of all you start recording, then you execute the statements to repeat on each database, then you stop recording. From that moment on, you can enumerate the statements captured and execute them, passing a specific statement id or a range of ids.
The typical use of the procedure could look like this:
-- start recording
EXECUTE replay_statements_on_each_db @action = 'RECORD'

-- run the statements you want to replay
SELECT DATABASEPROPERTYEX(DB_NAME(),'Recovery') AS RecoveryModel

-- stop recording
EXECUTE replay_statements_on_each_db @action = 'STOPRECORD'

-- display captured statements
EXECUTE replay_statements_on_each_db @action = 'SHOWQUERY'

-- execute the first statement
EXECUTE replay_statements_on_each_db
    @action = 'REPLAY',
    @start_statement_id = 1,
    @end_statement_id = 1
You can see the results of the script execution here:
Obviously this approach is totally overkill for just selecting the database recovery model, but it can become very handy when the statement’s complexity rises.
This seems a perfect fit for Glenn Berry’s diagnostic queries, which is where we started from. You can go back to that script and add the record instructions just before the database-specific queries start:
At the end of the script you can add the instructions to stop recording and show the queries captured by the procedure.
Once the statements are recorded, you can run any of the statements against all databases. For instance, I decided to run the top active writes index query (query 51).
As expected, the procedure adds the database name column to the result set and then displays the merged results.
You may have noticed that I skipped the first statement in the database-specific section of the script, which is a DBCC command. Unfortunately, not all kinds of statements can be captured with this procedure, because some limitations apply. Besides the inability to capture some DBCC commands, please note that the column names must be explicitly set.
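For example, the first statement below cannot be replayed, because SELECT…INTO needs a name for every column of the output table, while the aliased version works fine:

-- fails during replay: the computed column is unnamed
SELECT COUNT(*) FROM sys.tables;

-- works: every column is explicitly named
SELECT COUNT(*) AS table_count FROM sys.tables;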
I think that a CLR procedure could overcome these limitations, or at least some of them. I hope I will find the time to try the CLR method soon and I promise I will blog the results.
There’s a lot of code on that page and I thought that making it available for download would make it easier to play with.
You can download the code from this page or from the Code Repository.
I hope you enjoy reading the article as much as I enjoyed writing it.
How do you eat an elephant? One byte at a time, obviously!
Sometimes, when you have to optimize a poor performing query, you may find yourself staring at a huge statement, wondering where to start.
Some developers think that a single elephant statement is better than multiple small statements, but this is not always the case.
Let’s try to look at it from the perspective of software quality:
- The optimizer will likely come up with a suboptimal plan, giving up early on optimizations and transformations.
- Any slight change in statistics could lead the optimizer to produce a different and less efficient plan.
- A single huge statement is less readable and maintainable than multiple small statements.
With those points in mind, the only sensible thing to do is cut the elephant into smaller pieces and eat them one at a time.
This is how I do it:
- Lay out the original code and read the statement carefully
- Decide whether a full rewrite is more convenient
- Set up a test environment
- Identify the query parts
- Identify the main tables
- Identify non correlated subqueries and UNIONs
- Identify correlated subqueries
- Write a query outline
- Break the statement into parts with CTEs, views, functions and temporary tables
- Merge redundant subqueries
- Put it all together
- Verify the output based on multiple different input values
- Comment your work thoroughly
1. Lay out the original code and read the statement carefully
Use one of the many SQL formatters you can find online. My favorite one is Tao Klerks’ Poor Man’s T-SQL Formatter: it’s very easy to use and configure and it comes with a handy SSMS add-in and plugins for Notepad++ and WinMerge. Moreover, it’s free and open source. A must-have.
Once your code is readable, don’t rush to the keyboard: take your time and read it carefully.
- Do you understand (more or less) what it is supposed to do?
- Do you think you could have coded it yourself?
- Do you know all the T-SQL constructs it contains?
If you answered “yes” to all the above, you’re ready to go to the next step.
2. Decide whether a full rewrite is more convenient
OK, that code sucks and you have to do something. It’s time to make a decision:
- Take the business rules behind the statement and rewrite it from scratch
When the statement is too complicated and unreadable, it might be less time-consuming to throw the old statement away and write your own version.
Usually it is quite easy when you know exactly what the code is supposed to do. Just make sure you’re not making wrong assumptions and be prepared to compare your query with the original one many times.
- Refactor the statement
When the business rules are unclear (or unknown), starting from scratch is not an option. No, don’t laugh! The business logic may have been buried in the sands of time, or you may simply be working on a query with no desire to understand the business processes behind it.
Bring a big knife: you’re going to cut the elephant in pieces.
- Leave the statement unchanged
Sometimes the statement is too big or too complicated to bother taking the time to rewrite it. For instance, this query would take months to rewrite manually.
It works? Great: leave it alone.
3. Set up a test environment
It doesn’t matter how you decide to do it: at the end of the day you will have to compare the results of your rewritten query with the results of the “elephant” and make sure you did not introduce errors in your code.
The best way to do this is to prepare a script that compares the results of the original query with the results of your rewritten version. This is the script I am using (you will find it in the code repository, as usual).
-- =============================================
-- Author:      Gianluca Sartori - spaghettidba
-- Create date: 2012-03-14
-- Description: Runs two T-SQL statements and
--              compares the results
-- =============================================

-- Drop temporary tables
IF OBJECT_ID('tempdb..#original') IS NOT NULL
    DROP TABLE #original;

IF OBJECT_ID('tempdb..#rewritten') IS NOT NULL
    DROP TABLE #rewritten;

-- Store the results of the original
-- query into a temporary table
WITH original AS (
    <original, text, >
)
SELECT *
INTO #original
FROM original;

-- Add a sort column
ALTER TABLE #original ADD [______sortcolumn] int identity(1,1);

-- Store the results of the rewritten
-- query into a temporary table
WITH rewritten AS (
    <rewritten, text, >
)
SELECT *
INTO #rewritten
FROM rewritten;

-- Add a sort column
ALTER TABLE #rewritten ADD [______sortcolumn] int identity(1,1);

-- Compare the results
SELECT 'original' AS source, *
FROM (
    SELECT * FROM #original
    EXCEPT
    SELECT * FROM #rewritten
) AS A
UNION ALL
SELECT 'rewritten' AS source, *
FROM (
    SELECT * FROM #rewritten
    EXCEPT
    SELECT * FROM #original
) AS B;
The script is an SSMS query template that takes the results of the original and the rewritten query and compares the resultsets, returning all the missing or different rows. The script uses two CTEs to wrap the two queries: this means that the ORDER BY predicate (if any) will have to be moved outside the CTE.
Also, the results of the two queries are piped to temporary tables, which means that you can’t have duplicate column names in the result set.
Another thing worth noting is that the statements to compare cannot be stored procedures. One simple way to overcome this limitation is to use the technique I described in this post.
The queries inside the CTEs should then be rewritten as:
SELECT * FROM OPENQUERY(LOOPBACK,'<original, text,>')
Obviously, all the quotes must be doubled, which is the reason why I didn’t set up the script this way in the first place. It’s annoying, but it’s the only way I know of to pipe the output of a stored procedure into a temporary table without knowing the resultset definition in advance. If you can do better, suggestions are always welcome.
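Just to show what the doubling looks like, here is a hedged sketch (the procedure name is made up; LOOPBACK is the linked server described in that post):

SELECT *
FROM OPENQUERY(LOOPBACK, '
    EXEC dbo.getOrdersReport @year = ''2012''
');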
4. Identify the query parts
OK, now you have everything ready and you can start eating the elephant. The first thing to do is to identify all the autonomous blocks in the query and give them a name. You can do this at any granularity and repeat the task as many times as you like: the important thing is that at the end of this process you have a list of query parts and a name for each part.
Identify the main tables
Usually I like the idea that the data comes from one “main” table and all the rest comes from correlated tables. For instance, if I have to return a resultset containing some columns from the “SalesOrderHeader” table and some columns from the “SalesOrderDetail” table, I consider SalesOrderHeader the main table and SalesOrderDetail a correlated table. It fits well with my mindset, but you are free to see things the way you prefer.
Probably these tables are already identified by an alias: note down the aliases and move on.
Identify non correlated subqueries and UNIONs
Non-correlated subqueries can be treated as inline views. Often these subqueries are joined to the main tables to enrich the resultset with additional columns.
Don’t be scared away by huge subqueries: you can always repeat all the steps for any single subquery and rewrite it to be more compact and readable.
Again, just note down the aliases and move to the next step.
Identify correlated subqueries
Correlated subqueries are not different from non-correlated subqueries, with the exception that you will have less freedom to move them from their current position in the query. However, that difference doesn’t matter for the moment: give them a name and note it down.
5. Write a query outline
Use the names you identified in the previous step and write a query outline. It won’t execute, but it gives you the big picture.
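Something like this, reusing names that will appear later in this post (an outline only, written as comments because it is not meant to execute):

-- Query outline (names from step 4):
--
-- MAIN:        Production.Product AS PR
-- INLINE VIEW: SellOutPrices  (aggregates on Sales.SalesOrderDetail)
-- CORRELATED:  OrdersCount    (counts on Sales.SalesOrderHeader)
--
-- SELECT <product columns>, <SellOutPrices columns>, <OrdersCount columns>
-- FROM PR
-- CROSS APPLY SellOutPrices
-- CROSS APPLY OrdersCount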
If you really want the big picture, print the query. It may seem crazy, but sometimes I find it useful to be able to see the query as a whole, with all the parts with their names highlighted in different colors.
Yes, that’s a single SELECT statement, printed in Courier new 8 pt. on 9 letter sheets, hanging on the wall in my office.
6. Break the statement into parts with CTEs, views, functions and temporary tables
SQL Server offers a fair amount of tools that allow breaking a single statement into parts:
- Common Table Expressions
- Inline Table Valued Functions
- Multi-Statement Table Valued Functions
- Stored procedures
- Temporary Tables
- Table Variables
Ideally, you will choose the one that performs best in your scenario, but you could also take usability and modularity into account.
CTEs and subqueries are a good choice when the statement they contain is not used elsewhere and there is no need to reuse that code.
Table Valued Functions and views, on the contrary, are most suitable when there is an actual need to encapsulate the code in modules to be reused in multiple places.
Generally speaking, you will use temporary tables or table variables when the subquery gets used more than once in the statement, thus reducing the load.
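A minimal sketch of the temporary table option, materializing a subquery once so that it can be joined several times (AdventureWorks names, as in the examples below):

-- materialize the expensive subquery once...
SELECT ProductID, AVG(UnitPrice) AS AverageSellOutPrice
INTO #sellOutPrices
FROM Sales.SalesOrderDetail
GROUP BY ProductID;

-- ...then join it as many times as needed,
-- without hitting Sales.SalesOrderDetail again
SELECT PR.ProductID, PR.Name, S.AverageSellOutPrice
FROM Production.Product AS PR
LEFT JOIN #sellOutPrices AS S
    ON S.ProductID = PR.ProductID;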
Though I would really like to go into deeper details on the performance pros and cons of each construct, that would take an insane amount of time and space. You can find a number of articles and blogs on those topics and I will refrain from echoing them here.
7. Merge redundant subqueries
Some parts of your query may be redundant and you may have the opportunity to merge those parts. The merged query will be more compact and will likely perform significantly better.
For instance, you could have multiple subqueries that perform aggregate calculations on the same row set:
SELECT ProductID
    ,Name
    ,AverageSellOutPrice = (
        SELECT AVG(UnitPrice)
        FROM Sales.SalesOrderDetail
        WHERE ProductID = PR.ProductID
    )
    ,MinimumSellOutPrice = (
        SELECT MIN(UnitPrice)
        FROM Sales.SalesOrderDetail
        WHERE ProductID = PR.ProductID
    )
    ,MaximumSellOutPrice = (
        SELECT MAX(UnitPrice)
        FROM Sales.SalesOrderDetail
        WHERE ProductID = PR.ProductID
    )
FROM Production.Product AS PR;
The above query can be rewritten easily to avoid hitting the SalesOrderDetail table multiple times:
SELECT ProductID
    ,Name
    ,AverageSellOutPrice
    ,MinimumSellOutPrice
    ,MaximumSellOutPrice
FROM Production.Product AS PR
CROSS APPLY (
    SELECT AVG(UnitPrice), MIN(UnitPrice), MAX(UnitPrice)
    FROM Sales.SalesOrderDetail
    WHERE ProductID = PR.ProductID
) AS SellOutPrices (AverageSellOutPrice, MinimumSellOutPrice, MaximumSellOutPrice);
Another typical situation where you can merge some parts is when multiple subqueries perform counts on slightly different row sets:
SELECT ProductID
    ,Name
    ,OnlineOrders = (
        SELECT COUNT(*)
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.OnlineOrderFlag = 1
            AND EXISTS (
                SELECT *
                FROM Sales.SalesOrderDetail
                WHERE SalesOrderID = SOH.SalesOrderID
                    AND ProductID = PR.ProductID
            )
    )
    ,OfflineOrders = (
        SELECT COUNT(*)
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.OnlineOrderFlag = 0
            AND EXISTS (
                SELECT *
                FROM Sales.SalesOrderDetail
                WHERE SalesOrderID = SOH.SalesOrderID
                    AND ProductID = PR.ProductID
            )
    )
FROM Production.Product AS PR;
The only difference between the two subqueries is the predicate on SOH.OnlineOrderFlag. The two queries can be merged introducing a CASE expression in the aggregate:
SELECT ProductID
    ,Name
    ,ISNULL(OnlineOrders, 0) AS OnlineOrders
    ,ISNULL(OfflineOrders, 0) AS OfflineOrders
FROM Production.Product AS PR
CROSS APPLY (
    SELECT SUM(CASE WHEN SOH.OnlineOrderFlag = 1 THEN 1 ELSE 0 END),
           SUM(CASE WHEN SOH.OnlineOrderFlag = 0 THEN 1 ELSE 0 END)
    FROM Sales.SalesOrderHeader AS SOH
    WHERE EXISTS (
        SELECT *
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID = SOH.SalesOrderID
            AND ProductID = PR.ProductID
    )
) AS OrdersCount (OnlineOrders, OfflineOrders);
There are infinite possibilities and enumerating them all would be far beyond the scope of this post. This is one of the topics that my students often find hard to understand and I realize that it really takes some experience to identify merge opportunities and implement them.
8. Put it all together
Remember the query outline you wrote previously? It’s time to put it into action.
Some of the identifiers may have gone away in the merge process, some others are still there and have been transformed into different SQL constructs, such as CTEs, iTVFs or temporary tables.
9. Verify the output based on multiple different input values
Now it’s time to see if your new query works exactly like the original one. You already have a script for that: you can go on and use it.
Remember that the test can be considered meaningful only if you repeat it a reasonably large number of times, with different parameters. Some queries could appear to be identical, but still be semantically different. Make sure the rewritten version handles NULLs and out-of-range parameters in the same way.
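A classic example of two queries that look equivalent but stop being so as soon as NULLs enter the picture:

-- returns the products that were never sold...
SELECT ProductID
FROM Production.Product
WHERE ProductID NOT IN (
    SELECT ProductID FROM Sales.SalesOrderDetail
);

-- ...and so does this one, but only as long as the subquery
-- column contains no NULLs: a single NULL makes the
-- NOT IN version return no rows at all
SELECT PR.ProductID
FROM Production.Product AS PR
WHERE NOT EXISTS (
    SELECT *
    FROM Sales.SalesOrderDetail AS D
    WHERE D.ProductID = PR.ProductID
);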
10. Comment your work thoroughly
If you don’t comment your work, somebody will find it even more difficult to maintain than the elephant you found when you started.
Comments are free and don’t affect the query performance in any way. Don’t add comments that mimic what the query does; instead, write a meaningful description of the output of the query.
For instance, given a code fragment like this:
SELECT H.SalesOrderID, OrderDate, ProductID
INTO #orders
FROM Sales.SalesOrderHeader AS H
INNER JOIN Sales.SalesOrderDetail AS D
    ON H.SalesOrderID = D.SalesOrderID
WHERE OrderDate BETWEEN @StartDate AND @EndDate
a comment like “joins OrderHeader to OrderDetail” adds nothing to the clarity of the code. A comment like “Selects the orders placed between the @StartDate and @EndDate and saves the results in a temporary table for later use” would be a much better choice.
Elephant eaten. (Burp!)
After all, it was not too big, was it?