Blog Archives

Database Free Space Monitoring – The right way


Lately I have spent some time evaluating monitoring tools for SQL Server, and one thing that struck me very negatively is that none of them (to date) reports database free space correctly.
I was actively evaluating one of those tools when one of my production databases ran out of space without any sort of warning.
I was so upset that I decided to code my own monitoring script.

Some things to take into account:

  • Hard set limits for file growth have to be considered: a drive with lots of space is useless if the database file cannot grow and take it.
  • If fixed growth is used, there must be enough space in the drive to accommodate the growth amount you set.
  • If percent growth is used, you have to calculate iteratively how much your database file can grow before taking up all the space in the drive.
  • Some scripts found in blogs and books don’t account for mount points. Use sys.dm_os_volume_stats to include mount points in your calculation (unless you’re running SQL Server versions prior to 2012).
  • Database free space alone is not enough. NTFS performance starts degrading when the drive free space drops below 20%. Make sure you’re monitoring that as well.
  • 20% of a huge database can be lots of space. You can change that threshold to whatever you find appropriate (for instance, less than 20% AND less than 20 GB).

That said, here is my script; I hope you find it useful.


-- create a temporary table to hold data from sys.master_files
IF OBJECT_ID('tempdb..#masterfiles') IS NOT NULL
    DROP TABLE #masterfiles;

CREATE TABLE #masterfiles (
    database_id int,
    type_desc varchar(10),
    name sysname,
    physical_name varchar(255),
    size_mb int,
    max_size_mb int,
    growth int,
    is_percent_growth bit,
    data_space_id int,
    data_space_name nvarchar(128) NULL,
    drive nvarchar(512),
    mbfree int
);

-- extract file information from sys.master_files
-- and correlate each file to its logical volume
INSERT INTO #masterfiles
SELECT
     mf.database_id
    ,type_desc
    ,name
    ,physical_name
    ,size_mb = size / 128
    ,max_size_mb =
        CASE
            WHEN max_size = 268435456 AND type_desc = 'LOG' THEN -1
            ELSE
                CASE
                    WHEN max_size = -1 THEN -1
                    ELSE max_size / 128
                END
        END
    ,mf.growth
    ,mf.is_percent_growth
    ,mf.data_space_id
    ,NULL
    ,d.volume_mount_point
    ,d.available_bytes / 1024 / 1024
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(database_id, file_id) AS d;

-- add an "emptyspace" column to hold empty space for each file
ALTER TABLE #masterfiles ADD emptyspace_mb int NULL;

-- iterate through all databases to calculate empty space for its files
DECLARE @name sysname;

DECLARE c CURSOR FORWARD_ONLY READ_ONLY STATIC LOCAL
FOR
SELECT name
FROM sys.databases
WHERE state_desc = 'ONLINE'

OPEN c
FETCH NEXT FROM c INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN

    DECLARE @sql nvarchar(max)
    DECLARE @statement nvarchar(max)
    SET @sql = '
        UPDATE mf
        SET emptyspace_mb = size_mb - FILEPROPERTY(name,''SpaceUsed'') / 128,
            data_space_name =
                ISNULL(
                    (SELECT name FROM sys.data_spaces WHERE data_space_id = mf.data_space_id),
                    ''LOG''
                )
        FROM #masterfiles AS mf
        WHERE database_id = DB_ID();
    '
    SET @statement = 'EXEC ' + QUOTENAME(@name) + '.sys.sp_executesql @sql'
    EXEC sp_executesql @statement, N'@sql nvarchar(max)', @sql

    FETCH NEXT FROM c INTO @name
END

CLOSE c
DEALLOCATE c

-- create a scalar function to simulate the growth of the database in the drive's available space
IF OBJECT_ID('tempdb..calculateAvailableSpace') IS NOT NULL
    EXEC tempdb.sys.sp_executesql N'DROP FUNCTION calculateAvailableSpace'

EXEC tempdb.sys.sp_executesql N'
CREATE FUNCTION calculateAvailableSpace(
    @diskFreeSpaceMB float,
    @currentSizeMB float,
    @growth float,
    @is_percent_growth bit
)
RETURNS int
AS
BEGIN
    IF @currentSizeMB = 0
        SET @currentSizeMB = 1
    DECLARE @returnValue int = 0
    IF @is_percent_growth = 0
    BEGIN
        SET @returnValue = (@growth /128) * CAST((@diskFreeSpaceMB / (@growth / 128)) AS int)
    END
    ELSE
    BEGIN
        DECLARE @prevsize AS float = 0
        DECLARE @calcsize AS float = @currentSizeMB
        WHILE @calcsize < @diskFreeSpaceMB
        BEGIN
            SET @prevsize = @calcsize
            SET @calcsize = @calcsize + @calcsize * @growth / 100.0
        END
        SET @returnValue = @prevsize - @currentSizeMB
        IF @returnValue < 0
            SET @returnValue = 0
    END

    RETURN @returnValue
END
'

-- report database filegroups with less than 20% available space
;WITH masterfiles AS (
    SELECT *
        ,available_space =
            CASE mf.max_size_mb
                WHEN -1 THEN tempdb.dbo.calculateAvailableSpace(mbfree, size_mb, growth, is_percent_growth)
                ELSE max_size_mb - size_mb
            END
            + emptyspace_mb
    FROM #masterfiles AS mf
),
spaces AS (
    SELECT
         DB_NAME(database_id) AS database_name
        ,data_space_name
        ,type_desc
        ,SUM(size_mb) AS size_mb
        ,SUM(available_space) AS available_space_mb
        ,SUM(available_space) * 100 /
            CASE SUM(size_mb)
                WHEN 0 THEN 1
                ELSE SUM(size_mb)
            END AS available_space_percent
    FROM masterfiles
    GROUP BY DB_NAME(database_id)
        ,data_space_name
        ,type_desc
)
SELECT *
FROM spaces
WHERE available_space_percent < 20
ORDER BY available_space_percent ASC

IF OBJECT_ID('tempdb..#masterfiles') IS NOT NULL
    DROP TABLE #masterfiles;

IF OBJECT_ID('tempdb..calculateAvailableSpace') IS NOT NULL
    EXEC tempdb.sys.sp_executesql N'DROP FUNCTION calculateAvailableSpace'
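
If you prefer the combined threshold mentioned in the bullet list above (less than 20% AND less than 20 GB), the final SELECT of the script can be adjusted along these lines (a sketch; 20 GB is expressed as 20480 MB):

SELECT *
FROM spaces
WHERE available_space_percent < 20
    AND available_space_mb < 20480 -- 20 GB
ORDER BY available_space_percent ASC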

I am sure that there are smarter scripts around that calculate it correctly and I am also sure that there are other ways to obtain the same results (PowerShell, to name one). The important thing is that your script takes every relevant aspect into account and warns you immediately when the database space drops below your threshold, not after the space has run out.

The last time it happened to me it was late on a Saturday night and, while I really love my job, I can come up with many better ways to spend my Saturday night.

I’m pretty sure you do as well.

Announcing ExtendedTSQLCollector


I haven’t been blogging much lately; actually, I haven’t been blogging at all in the last 4 months. The reason is that I have been putting all my effort into a new project I started recently, which has absorbed all my attention and spare time.

I am proud to announce that my project is now live and available to everyone for download.

The project name is ExtendedTSQLCollector and you can find it at http://extendedtsqlcollector.codeplex.com. As you may have already guessed, it’s a bridge between two technologies that were not meant to work together, but that can bring great advantages when combined: Extended Events and the Data Collector.

ExtendedTSQLCollector is a set of two Collector Types built to overcome some of the limitations found in the built-in collector types and extend their functionality to include the ability to collect data from XE sessions.

The first Collector Type is the “Extended T-SQL Query” collector type, which was my initial goal when I started the project. If you have had the chance to play with the built-in “Generic T-SQL Query” collector type, you may have noticed that not all datatypes are supported. For instance, it’s impossible to collect data from XML or varchar(max) columns. This is due to the intermediate format used by this collector type: the SSIS raw files.

The “Extended T-SQL Query” collector type uses a different intermediate format, which allows collecting data of any data type. This is particularly useful, because SQL Server exposes lots of information in XML format (just think of the execution plans!) and you no longer need to code custom SSIS packages to collect that data.

The second Collector Type is the “Extended XE Reader” collector type, which takes advantage of the Extended Events streaming APIs to collect data from an Extended Events session, without the need to specify additional targets such as .xel files or ring buffers. This means no file system bloat due to .xel rollover files and no memory consumption for additional ring buffers: all the events are read directly from the session and processed in near real-time.

In addition to the filter predicates defined in the XE session, you can add more filter predicates on the data to collect and upload to the MDW and decide which columns (fields and actions) to collect. The collector will take care of creating the target table in your MDW database and upload all the data that satisfies the filter predicates.

The near real-time behavior of this collector type allowed me to include an additional feature to the mix: the ability to fire alerts in response to Extended Events. The current release (1.5) allows firing email alerts when the events are captured, with additional filter predicates and the ability to include event fields and actions in the email body. You can find more information on XE alerts in the documentation.

Here is an example of the email alerts generated by the XEReader collector type for the blocked_process event:

[image: sample email alert for the blocked_process event]

Another part of the project is the CollectionSet Manager, a GUI to install the collector types on the target servers and configure collection sets and collection items. I think that one of the reasons why the Data Collector is so underutilized by DBAs is the lack of a graphical UI. Besides the features specific to the ExtendedTSQLCollector, such as installing the collector type, this small utility aims to provide the features missing in the SSMS Data Collector UI. This part of the project is still at an early stage, but I am planning to release it in the next few months.

My journey through the ins and outs of the Data Collector allowed me to understand deeply how it works and how to set it up and troubleshoot it. Now I am planning to start a blog series on this topic, from the basics to the advanced features. Stay tuned :-)

I don’t want to go into deep details on the setup and configuration of this small project: I just wanted to ignite your curiosity and make you rush to codeplex to download your copy of ExtendedTSQLCollector.

What are you waiting for?

Non-unique indexes that COULD be unique


In my last post I showed a query to identify non-unique indexes that should be unique.

You may have other indexes that could be unique based on the data they contain, but are not.

To find out, you just need to query each of those indexes and group by the whole key, filtering out those that have duplicate values. It may look like an overwhelming amount of work, but the good news is I have a script for that:
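
To make the idea concrete, for a single index on a hypothetical dbo.orders table keyed on (customer_id, order_date), the duplicate check boils down to something like this:

SELECT customer_id, order_date, COUNT(*) AS cnt
FROM dbo.orders
GROUP BY customer_id, order_date
HAVING COUNT(*) > 1;

If the query returns no rows, the index key is a candidate for a UNIQUE index. The script below builds and runs this kind of check for every non-unique index in the database.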

DECLARE @sql nvarchar(max);

WITH indexes AS (
    SELECT
         QUOTENAME(OBJECT_SCHEMA_NAME(uq.object_id)) AS [schema_name]
        ,QUOTENAME(OBJECT_NAME(uq.object_id)) AS table_name
        ,uq.name AS index_name
        ,cols.name AS cols
    FROM sys.indexes AS uq
    CROSS APPLY (
        SELECT STUFF((
            SELECT ',' + QUOTENAME(sc.name) AS [text()]
            FROM sys.index_columns AS uc
            INNER JOIN sys.columns AS sc
                ON  uc.column_id = sc.column_id
                AND uc.object_id = sc.object_id
            WHERE uc.object_id = uq.object_id
                AND uc.index_id = uq.index_id
                AND uc.is_included_column = 0
            FOR XML PATH('')
        ),1,1,SPACE(0))
    ) AS cols (name)
    WHERE is_unique = 0
        AND has_filter = 0
        AND is_hypothetical = 0
        AND type IN (1,2)
        AND object_id IN (
            SELECT object_id
            FROM sys.objects
            WHERE is_ms_shipped = 0
            AND type = 'U'
        )
)
-- Build a big statement to query index data
SELECT @sql = (
    SELECT
        'SELECT ''' + [schema_name] + ''' AS [schema_name],
            ''' + table_name + ''' AS table_name,
            ''' + index_name + ''' AS index_name,
            can_be_unique =
                CASE WHEN (
                    SELECT COUNT(*)
                    FROM (
                        SELECT ' + cols + ',COUNT(*) AS cnt
                        FROM ' + [schema_name] + '.' + [table_name] + '
                        GROUP BY ' + cols + '
                        HAVING COUNT(*) > 1
                    ) AS data
                    ) > 0
                THEN 0
                ELSE 1
                END;'
    FROM indexes
    FOR XML PATH(''), TYPE
).value('.','nvarchar(max)');

-- prepare a table to receive results
DECLARE @results TABLE (
    [schema_name] sysname,
    [table_name] sysname,
    [index_name] sysname,
    [can_be_unique] bit
)

-- execute the script and pipe the results
INSERT @results
EXEC(@sql)

-- show candidate unique indexes
SELECT *
FROM @results
WHERE can_be_unique = 1
ORDER BY [schema_name], [table_name], [index_name]

The script should complete fairly quickly, since each check is covered by the very index it examines. However, I suggest that you run it against a non-production copy of your database, as it will scan every non-unique index found in the database.

The results will include all the indexes that don’t contain duplicate data. Whether you should make those indexes UNIQUE, only you can tell.

Some indexes may contain unique data unintentionally, but could definitely store duplicate data in the future. If you know your data domain, you will be able to spot the difference.

Non-unique indexes that should be unique


Defining the appropriate primary key and unique constraints is fundamental for a good database design.


One thing that I often see overlooked is that any index whose key completely includes the key of a UNIQUE index should in turn be created as UNIQUE. You could argue that such an index has probably been created by mistake, but that’s not always the case.
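
A quick hypothetical example to illustrate the point (table and index names are made up):

CREATE TABLE dbo.customers (
    customer_id int IDENTITY(1,1) PRIMARY KEY,
    email varchar(200) NOT NULL,
    country char(2) NOT NULL
);

-- email alone is unique...
CREATE UNIQUE INDEX UQ_customers_email ON dbo.customers (email);

-- ...so this index, whose key contains the whole key of UQ_customers_email,
-- could safely be created as UNIQUE as well
CREATE INDEX IX_customers_email_country ON dbo.customers (email, country);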

If you want to check your database for indexes that can be safely made UNIQUE, you can use the following script:

SELECT OBJECT_SCHEMA_NAME(uq.object_id) AS [schema_name],
    OBJECT_NAME(uq.object_id) AS table_name,
    uq.name AS unique_index_name,
    nui.name AS non_unique_index_name
FROM sys.indexes AS uq
CROSS APPLY (
    SELECT name, object_id, index_id
    FROM sys.indexes AS nui
    WHERE nui.object_id = uq.object_id
        AND nui.index_id <> uq.index_id
        AND nui.is_unique = 0
        AND nui.has_filter = 0
        AND nui.is_hypothetical = 0
) AS nui
WHERE is_unique = 1
    AND has_filter = 0
    AND is_hypothetical = 0
    AND uq.object_id IN (
        SELECT object_id
        FROM sys.tables
    )
    AND NOT EXISTS (

        SELECT column_id
        FROM sys.index_columns AS uc
        WHERE uc.object_id = uq.object_id
            AND uc.index_id = uq.index_id
            AND uc.is_included_column = 0

        EXCEPT

        SELECT column_id
        FROM sys.index_columns AS nuic
        WHERE nuic.object_id = nui.object_id
            AND nuic.index_id = nui.index_id
            AND nuic.is_included_column = 0

    )
ORDER BY [schema_name], table_name, unique_index_name

You may wonder why you should bother making those indexes UNIQUE.

The answer is that constraints help the optimizer build better execution plans. Marking an index as UNIQUE tells the optimizer that one and only one row can be found for each key value: it’s valuable information that can actually help estimate the correct cardinality.

Does the script return any rows? Make those indexes UNIQUE, you’ll thank me later.
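
If you decide to do so, an existing index can be rebuilt as UNIQUE in place with the DROP_EXISTING option. Here is a sketch, reusing the hypothetical names from the example above:

CREATE UNIQUE NONCLUSTERED INDEX IX_customers_email_country
ON dbo.customers (email, country)
WITH (DROP_EXISTING = ON);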

SQL Server Agent in Express Edition


As you probably know, SQL Server Express doesn’t ship with SQL Server Agent.

This is a known limitation and many people have offered alternative solutions to schedule jobs, including the Windows Task Scheduler and free or commercial third-party applications.

My favourite SQL Server Agent replacement to date is Denny Cherry‘s Standalone SQL Agent, for two reasons:

  1. It uses msdb tables to read job information.
    This means that jobs, schedules and the like can be scripted using the same scripts you would use in the other editions (a sketch follows this list).
  2. It’s open source and it was started by a person I highly respect.
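
For instance, a job created through the standard msdb procedures should be picked up just like it would be by the real SQL Server Agent. A sketch, assuming those procedures are available on your Express instance (job name, command and schedule are made up):

EXEC msdb.dbo.sp_add_job
     @job_name = N'NightlyBackup';

EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'NightlyBackup'
    ,@step_name = N'Backup master'
    ,@subsystem = N'TSQL'
    ,@command   = N'BACKUP DATABASE master TO DISK = ''C:\Backups\master.bak''';

EXEC msdb.dbo.sp_add_jobschedule
     @job_name  = N'NightlyBackup'
    ,@name      = N'Nightly at 01:00'
    ,@freq_type = 4              -- daily
    ,@freq_interval = 1
    ,@active_start_time = 10000; -- 01:00:00 (HHMMSS)

EXEC msdb.dbo.sp_add_jobserver
     @job_name = N'NightlyBackup';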

However, while I still find it a great piece of software, there are a couple of downsides to take into account:

  1. It’s still a beta version and the project hasn’t been very active lately.
  2. There’s no GUI tool to edit jobs or monitor job progress.
  3. It fails to install when UAC is turned on.
  4. It’s not 100% compatible with SQL Server 2012.
  5. It doesn’t restart automatically when the SQL Server instance starts.
  6. It requires sysadmin privileges.

The UAC problem during installation is easy to solve: open an elevated command prompt and run the installer MSI. Easy peasy.

As far as SQL Server 2012 is concerned, the service fails to start when connected to a 2012 instance. In the ERRORLOG file (the one you find in the Standalone SQL Agent directory, not SQL Server’s) you’ll quickly find the reason for the failure: it can’t create the stored procedure sp_help_job_SSA. I don’t know why this happens: I copied the definition of the stored procedure from a 2008 instance and it worked fine.

If you don’t have a SQL Server 2008 instance available, you can extract the definition of the stored procedure from the source code at CodePlex.

Issue 5 is a bit trickier to tackle. When the service loses the connection to the target SQL Server instance, it won’t restart automatically and it will remain idle until you cycle the service manually. In the ERRORLOG file you’ll find a message that resembles this:

Error connecting to SQL Instance.
No connection attempt will be made until Sevice is restarted.

You can overcome this limitation using a startup stored procedure that restarts the service:

USE master
GO

EXEC sp_configure 'advanced',1
RECONFIGURE WITH OVERRIDE

EXEC sp_configure 'xp_cmdshell',1
RECONFIGURE WITH OVERRIDE
GO

USE master
GO

CREATE PROCEDURE startStandaloneSQLAgent
AS
BEGIN
    SET NOCOUNT ON;

    EXEC xp_cmdshell 'net stop "Standalone SQL Agent"'
    EXEC xp_cmdshell 'net start "Standalone SQL Agent"'

END
GO

EXEC sp_procoption @ProcName = 'startStandaloneSQLAgent'
      , @OptionName = 'startup'
      , @OptionValue = 'on';
 GO

However, you’ll probably notice that the SQL Server service account does not have sufficient rights to restart the service.

The following PowerShell script grants the SQL Server service account all the rights it needs. In order to run it, you need to download the code available at Rohn Edwards’ blog.

# Change to the display name of your SQL Server Express service
$service = Get-WmiObject win32_service |
	where-object { $_.DisplayName -eq "SQL Server (SQLEXPRESS2008R2)" }

$serviceLogonAccount = $service.StartName

$ServiceAcl = Get-ServiceAcl "Standalone SQL Agent"
$ServiceAcl.Access

# Add an ACE allowing the service user Start and Stop service rights:
$ServiceAcl.AddAccessRule((New-AccessControlEntry -ServiceRights "Start,Stop" -Principal $serviceLogonAccount))

# Apply the modified ACL object to the service:
$ServiceAcl | Set-ServiceAcl

# Confirm the ACE was saved:
Get-ServiceAcl "Standalone SQL Agent" | select -ExpandProperty Access

After running this script from an elevated Powershell instance, you can test whether the startup stored procedure has enough privileges by invoking it manually.
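
For instance, from a query window connected to the Express instance:

EXEC master.dbo.startStandaloneSQLAgent;

If the two xp_cmdshell calls report that the service stopped and started successfully, the permissions are in place.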

If everything works as expected, you can restart the SQL Server Express instance and the Standalone SQL Agent service will restart as well.

In conclusion, Standalone SQL Agent is a good replacement for SQL Server Agent in Express edition and, while it suffers from some limitations, I still believe it’s the best option available.

COPY_ONLY backups and Log Shipping


Last week I was in the process of migrating a couple of SQL Server instances from 2008 R2 to 2012.

In order to let the migration complete quickly, I set up log shipping from the old instance to the new instance. Obviously, the existing backup jobs had to be disabled, otherwise they would have broken the log chain.

That got me thinking: was there a way to keep both “regular” transaction log backups (taken by the backup tool) and the transaction log backups taken by log shipping?

[image: log shipping architecture]

The first thing that came to my mind was the COPY_ONLY option available since SQL Server 2005.

You probably know that COPY_ONLY backups are useful when you have to take a backup for a special purpose, for instance when you have to restore from production to test. With the COPY_ONLY option, database backups don’t break the differential base and transaction log backups don’t break the log chain.

My initial thought was that I could ship COPY_ONLY backups to the secondary and keep taking scheduled transaction log backups with the existing backup tools.

I was dead wrong.

Let’s see it with an example on a TEST database.

I took 5 backups:

  1. FULL database backup, to initialize the log chain. Please note that COPY_ONLY backups cannot be used to initialize the log chain.
  2. LOG backup
  3. LOG backup with the COPY_ONLY option
  4. LOG backup
  5. LOG backup with the COPY_ONLY option
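
For reference, the five backups can be reproduced with a sequence like this (the file paths are just placeholders):

BACKUP DATABASE TEST TO DISK = 'C:\Backups\TEST_full.bak';             -- 1. FULL
BACKUP LOG TEST TO DISK = 'C:\Backups\TEST_log_1.trn';                 -- 2. LOG
BACKUP LOG TEST TO DISK = 'C:\Backups\TEST_log_2.trn' WITH COPY_ONLY;  -- 3. LOG, COPY_ONLY
BACKUP LOG TEST TO DISK = 'C:\Backups\TEST_log_3.trn';                 -- 4. LOG
BACKUP LOG TEST TO DISK = 'C:\Backups\TEST_log_4.trn' WITH COPY_ONLY;  -- 5. LOG, COPY_ONLY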

The backup information can be queried from backupset in msdb:

SELECT
     ROW_NUMBER() OVER(ORDER BY bs.backup_start_date) AS [backup #]
    ,first_lsn
    ,last_lsn
    ,backup_start_date
    ,type
    ,is_copy_only
    ,DENSE_RANK() OVER(ORDER BY type, bs.first_lsn) AS sequence
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'TEST'

[image: backup sets returned by the query]

As you can see, the COPY_ONLY backups don’t truncate the transaction log and losing one of those backups wouldn’t break the log chain.

However, all backups always start from the first available LSN, which means that scheduled log backups taken without the COPY_ONLY option truncate the transaction log and make significant portions of the transaction log unavailable in the next COPY_ONLY backup.

You can see it clearly in the following picture: the LSNs highlighted in red should contain no gaps in order to be restored successfully to the secondary, but the regular TLOG backups break the log chain in the COPY_ONLY backups.

[image: backup sets with the LSN gaps highlighted]

That means that there’s little or no point in taking COPY_ONLY transaction log backups, as “regular” backups will always leave gaps in the COPY_ONLY log chain.

When log shipping is used, the secondary server is the only backup you can have, unless you keep the TLOG backups or use your backup tool directly to ship the logs.

Why on earth one should take a COPY_ONLY TLOG backup (more than one, at least) is beyond my comprehension, but that’s a whole different story.

SQL Server services are gone after upgrading to Windows 8.1


Yesterday I upgraded my laptop to Windows 8.1 and everything seemed to have gone smoothly.

I really like the improvements in Windows 8.1 and I think they’re worth the hassle of an upgrade if you’re still on Windows 8.

As I was saying, everything seemed to upgrade smoothly. Unfortunately, today I found out that SQL Server services were gone.

My configuration manager looked like this:

[image: SQL Server Configuration Manager showing only SQL Server Browser]

My laptop had an instance of SQL Server 2012 SP1 Developer Edition and the Windows upgrade process had deleted all SQL Server services but SQL Server Browser.

I thought that a repair would fix the issue, so I took out my SQL Server iso and ran the setup.

Unfortunately, during the repair process, something went wrong and it complained multiple times about “no mappings between Security IDs and account names” or something similar.

Anyway, the setup completed and the services were back in place, but were totally misconfigured.

[image: SQL Server Configuration Manager after the repair]

SQL Server Agent had start mode “disabled” and the service account had been changed to “localsystem” (go figure…).

After changing the start mode and service accounts, everything was back to normal.

I hope this post helps others that are facing the same issue.

Check SQL Server logins with weak password


SQL Server logins can implement the same password policies found in Active Directory to make sure that strong passwords are being used.

Unfortunately, especially for servers upgraded from previous versions, the password policies are often disabled and some logins have very weak passwords.

In particular, some logins could have a password equal to the login name, which would be one of the first things I would try if I wanted to hack a server.

Are you sure none of your logins has such a poor password?

PowerShell to the rescue!

try {
    if((Get-PSSnapin -Name SQlServerCmdletSnapin100 -ErrorAction SilentlyContinue) -eq $null){
        Add-PSSnapin SQlServerCmdletSnapin100
    }
}
catch {
    Write-Error "This script requires the SQLServerCmdletSnapIn100 snapin"
    exit
}

cls

# Query server names from your Central Management Server
$qry = "
SELECT server_name
FROM msdb.dbo.sysmanagement_shared_registered_servers
"

$servers = Invoke-Sqlcmd -Query $qry -ServerInstance "YourCMSServerGoesHere"

# Extract SQL Server logins
# Why syslogins and not sys.server_principals?
# Believe it or not, I still support a couple of SQL Server 2000
$qry_logins = "
SELECT loginname, sysadmin
FROM syslogins
WHERE isntname = 0
AND loginname NOT LIKE '##%##'
"

$dangerous_logins = @()

$servers | % {
    $currentServer = $_.server_name
    $logins = Invoke-Sqlcmd -Query $qry_logins -ServerInstance $currentServer

    $logins | % {

        $currentLogin = $_.loginname
        $isSysAdmin = $_.sysadmin

        try {
            # Attempt logging in with login = password
            $one = Invoke-Sqlcmd -Query "SELECT 1" -ServerInstance $currentServer -Username $currentLogin -Password $currentLogin -ErrorAction Stop
            # OMG! Login successful
            # Add the login to $dangerous_logins
            $info = @{}
            $info.LoginName = $currentLogin
            $info.Sysadmin = $isSysAdmin
            $info.ServerName = $currentServer
            $loginInfo = New-Object -TypeName PsObject -Property $info
            $dangerous_logins += $loginInfo
        }
        catch {
            # If the login attempt fails, don't add the login to $dangerous_logins
        }

    }
}

#display dangerous logins
$dangerous_logins
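
For a single instance, a purely T-SQL alternative is to check the password hashes directly with PWDCOMPARE, provided you have permission to read them (a sketch, SQL Server 2005 and later):

SELECT name AS login_name,
       IS_SRVROLEMEMBER('sysadmin', name) AS is_sysadmin
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1;

Like the PowerShell loop above, this only catches passwords equal to the login name, but it does so without performing actual login attempts.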

SQL2014: Defining non-unique indexes in the CREATE TABLE statement


Now that my SQL Server 2014 CTP1 virtual machine is ready, I have started to play with it, and some new features and differences from the previous versions are starting to appear.

What I want to write about today is a T-SQL enhancement to DDL statements that brings in some new interesting considerations.

SQL Server 2014 now supports a new T-SQL syntax that allows defining an index in the CREATE TABLE statement without having to issue separate CREATE INDEX statements.

Up to now, the same could be achieved only with PRIMARY KEY and UNIQUE constraints, thus allowing UNIQUE indexes only.

For instance, the following statement creates a table with a unique clustered index on order_id and a unique nonclustered index on PO_number:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL
    ,total_amount decimal(18,3)
)

OK, but what if I want to add a non-unique index to my table?

SQL Server 2014 offers a new syntax to do that inline with the table DDL:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
     -- creates a nonclustered index on order_date
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

A similar syntax can be used to create a compound index:

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
    -- creates a compound index on PO_number and order_date
    ,INDEX IX_orders_compound(PO_number, order_date)
)

An interesting aspect of this new syntax is that it allows creating non-unique nonclustered indexes on table variables, which is something that couldn’t be done in the previous versions.

The syntax to use is the same as for permanent tables:

DECLARE @orders TABLE (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

Cool! But, wait: does this mean that table variables will now behave in the same way permanent tables do?

Not exactly.

Table variables don’t have statistics, and being able to create indexes on them won’t change anything in this regard.

Do you want proof? OK, skeptics can run the following code. Please make sure you capture the actual execution plan.

SET NOCOUNT ON;

-- create the table variable
DECLARE @orders TABLE (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

-- insert some data
INSERT INTO @orders (order_date, PO_number, total_amount)
SELECT
     order_date   = DATEADD(second, CHECKSUM(NEWID()), GETDATE())
    ,PO_number    = CAST(NEWID() AS varchar(50))
    ,total_amount = CHECKSUM(NEWID()) / 1000.0
FROM sys.all_columns

-- query the table variable
SELECT COUNT(*)
FROM @orders
WHERE order_date > GETDATE()
OPTION (
     -- activate some (undocumented) trace flags to show
     -- statistics usage. More information on the flags
     -- can be found on Paul White's blog:
     -- http://sqlblog.com/blogs/paul_white/archive/2011/09/21/how-to-find-the-statistics-used-to-compile-an-execution-plan.aspx

     -- redirect output to the messages tab
     QUERYTRACEON 3604
     -- show "interesting" statistics
    ,QUERYTRACEON 9292
    -- show loaded statistics
    ,QUERYTRACEON 9402
    -- add RECOMPILE to let the optimizer "see"
    -- the table cardinality
    ,RECOMPILE
)

The output of the above batch is empty. Looks like no stats were loaded.

The actual execution plan confirms that no stats were loaded and the estimated cardinality of the table variable is way off:

[image: actual execution plan for the table variable]

If we repeat the test with a temporary table, we see a different behaviour.

SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#orders') IS NOT NULL
    DROP TABLE #orders;

CREATE TABLE #orders (
     order_id uniqueidentifier NOT NULL
        PRIMARY KEY CLUSTERED DEFAULT NEWSEQUENTIALID()
    ,PO_number varchar(50) NOT NULL UNIQUE
    ,order_date datetime NOT NULL INDEX IX_order_date
    ,total_amount decimal(18,3)
)

INSERT INTO #orders (order_date, PO_number, total_amount)
SELECT
     order_date   = DATEADD(second, CHECKSUM(NEWID()), GETDATE())
    ,PO_number    = CAST(NEWID() AS varchar(50))
    ,total_amount = CHECKSUM(NEWID()) / 1000.0
FROM sys.all_columns

SELECT COUNT(*)
FROM #orders
WHERE order_date > GETDATE()
OPTION (
     QUERYTRACEON 3604
    ,QUERYTRACEON 9292
    ,QUERYTRACEON 9402
)

This time the messages tab contains some output:

Stats header loaded: DbName: tempdb, ObjName: #orders, IndexId: 2, ColumnName: order_date, EmptyTable: FALSE

Stats header loaded: DbName: tempdb, ObjName: #orders, IndexId: 2, ColumnName: order_date, EmptyTable: FALSE

The optimizer identified the statistics on the order_date column as “interesting” and then loaded the stats header.

Again, the actual execution plan confirms that a better estimation is available:

[image: actual execution plan for the temporary table]

The lack of statistics has always been the most significant difference between table variables and temporary tables and SQL2014 doesn’t appear to change the rules (yet).

A viable alternative to dynamic SQL in administration scripts


As a DBA, you probably have your toolbox full of scripts, procedures and functions that you use for the day-to-day administration of your instances.

I’m no exception and my hard drive is full of scripts that I tend to accumulate and never throw away, even if I know I will never need (or find?) them again.

However, my preferred way to organize and maintain my administration scripts is a database called “TOOLS”, which contains all the scripts I regularly use.

One of the challenges involved in keeping the scripts in a database rather than in a script file is the inability to choose a database context for the execution. When a statement is encapsulated in a view, function or stored procedure in a database, every reference to a database-specific object is limited to the database that contains the programmable object itself. The only way to overcome this limitation is the use of dynamic sql.

For instance, if I want to query the name of the tables in a database, I can use the following statement:

SELECT name FROM sys.tables

The statement references a catalog view specific to a single database, so if I enclose it in a stored procedure, the table names returned by this query are those found in the database that contains the stored procedure itself:

USE TOOLS;
GO

CREATE PROCEDURE getTableNames
AS
SELECT name FROM sys.tables;
GO

EXEC getTableNames;

This is usually not an issue, since most stored procedures will not cross the boundaries of the database they are created in. Administration scripts are different, because they are meant as a single entry point to maintain the whole SQL server instance.

In order to let the statement work against a different database, you need to choose one of the following solutions:

  1. dynamic SQL
  2. sp_executesql
  3. marking as a system object
  4. … an alternative way

Each of these techniques has its PROs and its CONs and I will try to describe them in this post.

1. Dynamic SQL

It’s probably the easiest way to solve the issue: you just have to concatenate the database name to the objects names.

USE TOOLS;
GO

ALTER PROCEDURE getTableNames @db_name sysname
AS
BEGIN
    DECLARE @sql nvarchar(max)
    SET @sql = 'SELECT name FROM '+ QUOTENAME(@db_name) +'.sys.tables';

    EXEC(@sql)
END
GO

EXEC getTableNames 'msdb';

PROS:

  • very easy to implement for simple statements

CONS:

  • can rapidly turn into a nightmare with big, complicated statements, as each object must be concatenated with the database name. Different objects relate to the database in different ways: tables and views can be concatenated directly, while functions such as OBJECT_NAME accept an additional parameter to specify the database name.
  • the statement has to be treated as a string and enclosed in quotes, which means that:
    • quotes must be escaped, and escaped quotes must be escaped again and escaped and re-escaped quotes… ok, you know what I mean
    • no development aids such as intellisense, just-in-time syntax checks and syntax coloring

 

2. sp_executesql

It’s a neater way to avoid concatenating the database name to each object referenced in the statement.

USE TOOLS;
GO

ALTER PROCEDURE getTableNames @db_name sysname
AS
BEGIN

    -- use a @sql variable to store the whole query
    -- without concatenating the database name

    DECLARE @sql nvarchar(max);

    SET @sql = 'SELECT name FROM sys.tables';

    -- concatenate the database name to the
    -- sp_executesql call, just once

    DECLARE @cmd nvarchar(max);

    SET @cmd = 'EXEC '+ QUOTENAME(@db_name) +'.sys.sp_executesql @sql';

    EXEC sp_executesql @cmd, N'@sql nvarchar(max)', @sql

END
GO

EXEC getTableNames 'msdb';

PROS:

  • the dynamic sql is taken as a whole and does not need to be cluttered with multiple concatenations

CONS:

  • needs some more work than a straight concatenation and can be seen as “obscure”
  • suffers from the same issues found with plain dynamic sql, because the statement is, again, treated as a string

3. System object

Nice and easy: every stored procedure you create in the master database with the “sp_” prefix can be executed from any database context.

Using the undocumented stored procedure sp_MS_marksystemobject you can also mark the stored procedure as a “system object” and let it reference the tables in the database from which it is invoked.

USE master;
GO

ALTER PROCEDURE sp_getTableNames
AS
BEGIN
    SELECT name FROM sys.tables
END
GO

EXEC sys.sp_MS_marksystemobject 'sp_getTableNames'
GO

USE msdb;
GO

EXEC sp_getTableNames;

PROS:

  • no need to use dynamic sql

CONS:

  • requires creating objects in the “master” database, which is something I tend to avoid
  • works with stored procedures only (actually, it works with other objects, such as tables and views, but you have to use the “sp_” prefix. The day I will find a view named “sp_getTableNames” in the master database it won’t be safe to stay near me)

4. An alternative method

It would be really helpful if we could store the statement we want to execute inside an object that doesn’t involve dynamic sql and doesn’t need to be stored in the master database. In other words, we need a way to get the best of both worlds.

Is there such a solution? Apparently, there isn’t.

The ideal object to store a statement and reuse it later is a view, but there is no way to “execute” a view against a different database. In fact you don’t execute a view, you just select from it, which is quite a different thing.

What you “execute” when you select from a view is the statement in its definition (not really, but let me simplify).

So, what we would need to do is just read the definition from a view and use the statement against the target database. Sounds straightforward, but it’s not.

The definition of a view also contains the “CREATE VIEW” statement, and stripping it off is not as easy as it seems.

Let’s see the issue with an example: I will create a view to query the last update date of the index statistics in a database, using the query from Glenn Berry’s Diagnostic Queries.

USE TOOLS;
GO

-- When were Statistics last updated on all indexes?  (Query 48)
CREATE VIEW statisticsLastUpdate
AS
SELECT
     DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON o.[object_id] = i.[object_id]
INNER JOIN sys.stats AS s WITH (NOLOCK)
    ON i.[object_id] = s.[object_id]
    AND i.index_id = s.stats_id
INNER JOIN sys.dm_db_partition_stats AS st WITH (NOLOCK)
    ON o.[object_id] = st.[object_id]
    AND i.[index_id] = st.[index_id]
WHERE o.[type] = 'U';

I just had to remove ORDER BY and OPTION(RECOMPILE) because query hints cannot be used in views.

Querying the object definition returns the whole definition of the view, not only the SELECT statement:

SELECT OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate')) AS definition
definition
-------------------------------------------------------------------
-- When were Statistics last updated on all indexes?  (Query 48)
CREATE VIEW statisticsLastUpdate
AS
SELECT
     DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute

(1 row(s) affected)

In order to extract the SELECT statement, we would need something able to parse (properly!) the view definition and we all know how complex it can be.

Fortunately, SQL Server ships with an undocumented function used in replication that can help solving the problem: its name is fn_replgetparsedddlcmd.

This function accepts some parameters, lightly documented in the code: fn_replgetparsedddlcmd (@ddlcmd, @FirstToken, @objectType, @dbname, @owner, @objname, @targetobject)

Going back to the example, we can use this function to extract the SELECT statement from the view definition:

SELECT master.sys.fn_replgetparsedddlcmd(
    OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate'))
        ,'CREATE'
        ,'VIEW'
        ,DB_NAME()
        ,'dbo'
        ,'statisticsLastUpdate'
        ,NULL
) AS statement
statement
---------------------------------------------------------------------
AS
SELECT
     DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count

(1 row(s) affected)

The text returned by the function still contains the “AS” keyword, but removing it is a no-brainer:

DECLARE @cmd nvarchar(max)
SELECT @cmd = master.sys.fn_replgetparsedddlcmd(
    OBJECT_DEFINITION(OBJECT_ID('statisticsLastUpdate'))
        ,'CREATE'
        ,'VIEW'
        ,DB_NAME()
        ,'dbo'
        ,'statisticsLastUpdate'
        ,NULL
    )

SELECT @cmd = RIGHT(@cmd, LEN(@cmd) - 2) -- Removes "AS"

SELECT @cmd AS statement
statement
-------------------------------------------------------------------

SELECT
     DB_NAME() AS database_name
    ,o.NAME AS stat_name
    ,i.NAME AS [Index Name]
    ,STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date]
    ,s.auto_created
    ,s.no_recompute
    ,s.user_created
    ,st.row_count

(1 row(s) affected)

Now that we are able to read the SELECT statement from a view’s definition, we can execute that statement against any database we like, or even against all the databases in the instance.

-- =============================================
-- Author:      Gianluca Sartori - spaghettidba
-- Create date: 2013-04-16
-- Description: Extracts the view definition
--              and runs the statement in the
--              database specified by @db_name
--              If the target database is a pattern,
--              the statement gets executed against
--              all databases matching the pattern.
-- =============================================

CREATE PROCEDURE [dba_execute_view]
     @view_name sysname
    ,@db_name sysname
AS
BEGIN

SET NOCOUNT,
    XACT_ABORT,
    QUOTED_IDENTIFIER,
    ANSI_NULLS,
    ANSI_PADDING,
    ANSI_WARNINGS,
    ARITHABORT,
    CONCAT_NULL_YIELDS_NULL ON;
    SET NUMERIC_ROUNDABORT OFF;

DECLARE @cmd nvarchar(max)
DECLARE @sql nvarchar(max)

DECLARE @vw_schema sysname
DECLARE @vw_name sysname

IF OBJECT_ID(@view_name) IS NULL
BEGIN
    RAISERROR('No suitable object found for name %s',16,1,@view_name)
    RETURN
END

IF DB_ID(@db_name) IS NULL
    AND @db_name NOT IN ('[USER]','[SYSTEM]')
    AND @db_name IS NOT NULL
BEGIN
    RAISERROR('No suitable database found for name %s',16,1,@db_name)
    RETURN
END

SELECT @vw_schema = OBJECT_SCHEMA_NAME(OBJECT_ID(@view_name)),
    @vw_name = OBJECT_NAME(OBJECT_ID(@view_name))

SELECT @cmd = master.sys.fn_replgetparsedddlcmd(
    OBJECT_DEFINITION(OBJECT_ID(@view_name))
        ,'CREATE'
        ,'VIEW'
        ,DB_NAME()
        ,@vw_schema
        ,@vw_name
        ,NULL
    )

SELECT @cmd = RIGHT(@cmd, LEN(@cmd) - 2) -- Removes "AS"

-- CREATE A TARGET TEMP TABLE
SET @sql = N'
    SELECT TOP(0) * INTO #results FROM ' + @view_name + ';

    INSERT #results
    EXEC [dba_ForEachDB]
        @statement = @cmd,
        @name_pattern = @db_name;

    SELECT * FROM #results;'

EXEC sp_executesql
     @sql
    ,N'@cmd nvarchar(max), @db_name sysname'
    ,@cmd
    ,@db_name

END

The procedure depends on dba_ForEachDB, the stored procedure I posted a couple of years ago that replaces the one shipped by Microsoft. If you still prefer their version, you’re free to modify the code as you wish.

Now that we have a stored procedure that “executes” a view, we can use it to query statistics update information from a different database:

EXEC [dba_execute_view] 'statisticsLastUpdate', 'msdb'

[image: results for the msdb database]

We could also query the same information from all user databases:

EXEC [dba_execute_view] 'statisticsLastUpdate', '[USER]'

[image: results for all user databases]

That’s it, very easy and straightforward.

Just one suggestion for the SELECT statements in the views: add a DB_NAME() column, in order to understand where the data comes from, or it’s going to be a total mess.

Next steps:

This is just the basic idea, the code can be improved in many ways.

For instance, we could add a parameter to decide whether the results must be piped to a temporary table or not. As you probably know, INSERT…EXEC cannot be nested, so you might want to pipe the results to a table in a different way.

Another thing you might want to add is the ability to order the results according to an additional parameter.

To sum it up, with a little help from Microsoft, we can now safely create a database packed with all our administration stuff and execute the queries against any database in our instance.
