Blog Archives

Workload analysis with WorkloadTools

Last week I introduced WorkloadTools and promised additional posts to describe what it can do in more detail. So, here we go: in this post I will show you how to use WorkloadTools to capture a workload and analyze it, in order to find things like the most resource-intensive queries or discover how the application behaves over time.

Capturing the workload

First of all, you need to capture the workload using SqlWorkload. As stated in the documentation, you need to provide a .json file that contains all the parameters needed to configure the Listener and the Consumers.

Configuring the Listener

You have two types of listeners to choose from in this case, and the choice will be determined mainly by the version of SQL Server that you’re working on: with older versions (up to SQL Server 2008 R2), you will have no choice but to use a SqlTraceWorkloadListener, because Extended Events do not provide the required events (sql_batch_completed in particular).
With newer versions (from SQL Server 2012 onwards), you can use an ExtendedEventsWorkloadListener.

Regardless of the type of Listener that you choose, there is not much information that you need to provide: the name of the server to connect to and the credentials are usually more than enough. The following json fragment shows a sample Listener configuration:

    "Listener":
    {
        // This line decides which type of Listener to use
        // Put the name of the class of your choice in this param
        "__type": "ExtendedEventsWorkloadListener",

        // Server name and credentials
        "ConnectionInfo":
        {
            "ServerName": "SQLDEMO\\SQL2014",
            "DatabaseName": "master",
            // Omit UserName and Password to use
            // Windows Authentication
            "UserName": "sa",
            "Password": "P4$$w0rd!"
        },

        // If you want to reduce the workload,
        // put some filters here
        "DatabaseFilter": "DS3"
    }

Some things to note:

  • The .json file accepts comments, even though the JSON standard does not allow them. The file gets minified before being read, so all comments are stripped away. Comments are useful for describing what the file does and why it configures things in that particular way.
  • Each Listener type accepts a different set of parameters. See the documentation for more details. Not all parameters are mandatory (optional parameters and their default values are described in square brackets).
  • You do not need to configure, start and stop a particular trace or Extended Events session: SqlWorkload takes care of everything for you. If you really wanted to, you could override the default trace/session script (but most of the time you don’t want to do that).
  • Filtering the source events is a good idea in several situations: it helps break down complex workloads by database, application, login or host.

Regardless of the type of Listener in use, the workload does not need to be written completely to a SqlTrace or Extended Events file before the Listener can use it: the events are read from the Listener source incrementally, using techniques that depend on the Listener type.

For instance, an ExtendedEventsWorkloadListener uses the XE streaming API to read the events without having to save them to a file target first. For a SqlTraceWorkloadListener, the events need to go to a trace file first, but SqlWorkload makes sure to use small rollover files and always reads the tail of the last available file. The same happens with an ExtendedEventsWorkloadListener when the FileTargetPath property is specified (this is mandatory for Azure SQL Database and must point to a blob storage URL).

Configuring the Consumers

All the events captured by the Listener are forwarded to all the Consumers registered on the Listener itself. You could have multiple Consumers for the same Listener, but in this example one Consumer is sufficient.

Your goal is to capture the workload on the source server and extract some information about the shape of the workload itself, looking for resource-intensive queries, long-running queries or queries that have a high number of executions. The correct Consumer type for this is the AnalysisConsumer.

This Consumer type can be configured in many aspects, but in its simplest form, it just needs to receive the name of the server, database and schema where the workload analysis tables will be created and populated. The schema name is important, because you may want to compare different benchmarks and the easiest way to do this is to save each benchmark to a different schema.

// Consumers are contained in a list
"Consumers":
[
    {
        // Choose the appropriate type of Consumer
        "__type": "AnalysisConsumer",

        // Fill in connection information
        "ConnectionInfo":
        {
            "ServerName": "SQLDEMO\\SQL2016",
            "DatabaseName": "SqlWorkload01",
            "SchemaName": "capture",
            "UserName": "sa",
            "Password": "P4$$w0rd!"
        },

        // This parameter controls how often the consumer
        // aggregates performance data and writes to the
        // analysis database
        "UploadIntervalSeconds": 60
    }
]

UploadIntervalSeconds is another important parameter, because it controls how often the workload analysis data is written to the destination database. AnalysisConsumer uses this parameter to time the creation of “intervals”: inside each interval, the performance data is aggregated by query, application, database, host name and login name.

The queries also get normalized during the analysis process: each constant or literal inside the query text gets replaced by a placeholder, which makes it possible to aggregate queries that differ only in their constant values. During the normalization process, SqlWorkload also calculates a 64-bit query hash, which can be used to retrieve the normalized query text and a sample query text captured before the normalization occurs.
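
To make the idea of normalization more concrete, here is a simple illustration with a made-up table (the exact placeholder token used by WorkloadTools is an implementation detail and may differ):

-- Two statements captured from the workload:
SELECT * FROM dbo.Orders WHERE OrderId = 42;
SELECT * FROM dbo.Orders WHERE OrderId = 314;

-- After normalization they collapse to the same text and share the same
-- query hash, so their performance metrics are aggregated together:
-- SELECT * FROM dbo.Orders WHERE OrderId = {p}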

Please note that, at the time of writing, the destination database does not get created automatically and needs to be created upfront. I am planning to remove this limitation, so keep an eye on this issue.
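
If you want to prepare the destination upfront, a minimal sketch could look like this, using the server, database and schema names from the example configuration (creating the schema as well is an assumption on my part; adjust the names to your environment):

-- Run on the analysis instance (SQLDEMO\SQL2016 in this example)
CREATE DATABASE SqlWorkload01;
GO

USE SqlWorkload01;
GO

-- One schema per benchmark makes later comparisons easier
CREATE SCHEMA capture;
GO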

Putting it all together and starting SqlWorkload

Now that Listener and Consumer are configured, you just need to put it all together in a single .json file and pass it to SqlWorkload.

Here is the complete sqlworkload.json:

    "Controller": {

            "__type": "ExtendedEventsWorkloadListener",
                "ServerName": "SQLDEMO\\SQL2014",
                "DatabaseName": "master",
                "UserName": "sa",
                "Password": "P4$$w0rd!"
            "DatabaseFilter": "DS3"

                "__type": "AnalysisConsumer",
                    "ServerName": "SQLDEMO\\SQL2016",
                    "DatabaseName": "SqlWorkload01",
                    "SchemaName": "capture",
                    "UserName": "sa",
                    "Password": "P4$$w0rd!"
                "UploadIntervalSeconds": 60

Now that the file is ready, save it (e.g. as analyze.json) and use it to run SqlWorkload:

"%programfiles%\workloadtools\sqlworkload.exe" --File "%cd%\analyze.json"

If everything is ok, SqlWorkload will start to print messages to the standard output of your cmd window:


SqlWorkload also writes to a log file (SqlWorkload.log), that you will find in the working directory. You can control the log level and other logging parameters by editing NLog.config in the installation folder.

Every “UploadIntervalSeconds” seconds, SqlWorkload will create a new interval, aggregate the performance data and upload everything to the analysis database. You will see a message similar to this when the upload occurs:


When you think that the capture has lasted long enough, you can stop SqlWorkload by pressing CTRL+C. Avoid simply closing the command window: that would not give SqlWorkload the chance to clean up the objects created for the capture, such as Extended Events sessions or SqlTrace captures.

Another option is to specify a value for the optional parameter TimeoutMinutes on the Listener configuration: when the timeout expires, the capture will stop automatically.

Visualizing performance data with WorkloadViewer

Now that the analysis database contains the performance data, you can use WorkloadViewer to visualize it and draw your conclusions.

WorkloadViewer is a GUI tool that reads performance data from the analysis database and gives a graphical representation using charts and grids. It accepts a number of command line arguments that allow you to automate its behavior, but it can also be opened without specifying any arguments: in this case, WorkloadViewer will present a form to fill in the missing information.

WorkloadViewer can be used to visualize information about a single benchmark (analysis mode) or two benchmarks (comparison mode). In this case, you just need to work with a single benchmark, so it is enough to enter the connection info on the left, including the schema name where the tables are. When using Windows Authentication, you can leave UserName and Password blank.


WorkloadViewer shows data in three tabs:

  • Workload
  • Queries
  • Query Details

The “Workload” tab shows three charts for total CPU, total duration and batches per second for each interval. This is useful to see how the workload evolves over time and identify any correlation between the three metrics.


The charts support panning (click and drag with the right mouse button) and zooming (mouse wheel) as well as zooming to a particular area (click and drag with the middle mouse button). To reset zoom and pan, double click with the middle mouse button.

The “Queries” tab contains a grid that displays important metrics about the individual queries: avg/sum CPU, avg/sum Duration, avg/sum reads, total number of executions. These metrics are calculated over the entire workload.


You can sort by a particular column by clicking the header: this allows you to rank queries by one of the metrics and identify immediately the top queries by that metric.

Clicking on one of the rows displays the query text on the bottom panel. Double clicking one of the rows takes you to the “Query Details” tab.


This tab displays information about the individual query: it contains the query text and the overall performance metrics, broken down by application name, database name, host name and login name.

The chart displays the average CPU usage over time.

WorkloadViewer is still under heavy development and does not yet show all the data available in the analysis database, but I hope to include more information in the future. If you feel adventurous enough, you can point Power BI to the analysis database and build a dashboard designed around your specific needs.


As you can see, WorkloadTools can be a valuable help when you need to analyze a workload. SqlWorkload allows you to capture all the data that you need with extreme ease, without having to configure anything other than basic connection information.

WorkloadViewer allows you to visualize the data in a convenient way and can give you an immediate overview of your workload, ranking queries by the most important performance metrics.

But this is only one of the many tasks that you can accomplish with WorkloadTools: in the weeks to come I will continue to write on this topic and I will show you how to unleash the full potential of WorkloadTools.

Stay tuned!

A bug in merge replication with FILESTREAM data

I wish I could say that every DBA has a love/hate relationship with Replication, but, let’s face it, it’s only hate. But it could get worse: it could be Merge Replication. Or even worse: Merge Replication with FILESTREAM.

What could possibly top all this hatred and despair if not a bug? Well, I happened to find one, that I will describe here.

The scenario

I have published tables with FILESTREAM data before, but it seems like there is a particular planetary alignment that triggers an error during the execution of the snapshot agent.

This unlikely combination consists of a merge article with a FILESTREAM column and two UNIQUE indexes on the ROWGUIDCOL column. Yes, I know that generally it does not make sense to have two indexes on the same column, but this happened to be one of the cases where it did, so we had a CLUSTERED PRIMARY KEY on the uniqueidentifier column decorated with the ROWGUIDCOL attribute and, on top, one more NONCLUSTERED UNIQUE index on the same column, backed by a UNIQUE constraint.
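
In DDL terms, the offending combination looks more or less like this (a compact, hypothetical illustration with made-up names; the full repro script is further down):

-- Requires a database with a FILESTREAM filegroup
CREATE TABLE dbo.SomeTable (
    RowGuid  uniqueidentifier ROWGUIDCOL NOT NULL,
    BlobData varbinary(max) FILESTREAM NOT NULL,
    -- clustered primary key on the ROWGUIDCOL column...
    CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (RowGuid),
    -- ...plus a redundant UNIQUE constraint (backed by a nonclustered
    -- unique index) on the very same column: this is what triggers the bug
    CONSTRAINT UQ_SomeTable_RowGuid UNIQUE NONCLUSTERED (RowGuid)
);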

Setting up the publication does not throw any error, but generating the initial snapshot for the publication does:

Cannot create, drop, enable, or disable more than one constraint,
column, index, or trigger named 'ncMSmerge_conflict_TestMergeRep_DataStream'
in this context. Duplicate names are not allowed.

Basically, the snapshot agent is complaining about the uniqueness of the name of one of the indexes it is trying to create on the conflict table. The interesting fact about this bug is that it doesn’t appear when the table has no FILESTREAM column and it doesn’t appear when the table doesn’t have the redundant UNIQUE constraint on the ROWGUID column: both conditions need to be met.

The script

Here is the full script to reproduce the bug.

Before you run it, make sure that:

  1. FILESTREAM is enabled
  2. Distribution is configured

USE master;
GO

-- Clean up any leftovers from previous runs: errors thrown by the
-- replication procedures are ignored, so the script can be re-run
IF DB_ID('TestMergeRep') IS NOT NULL
BEGIN
    BEGIN TRY
        exec TestMergeRep.sys.sp_dropmergepublication @publication = N'TestMergeRep';
    END TRY
    BEGIN CATCH
        PRINT 'sp_dropmergepublication failed';
    END CATCH

    BEGIN TRY
        exec TestMergeRep.sys.sp_removedbreplication 'TestMergeRep', 'merge';
    END TRY
    BEGIN CATCH
        PRINT 'sp_removedbreplication failed';
    END CATCH

    BEGIN TRY
        exec TestMergeRep.sys.sp_replicationdboption @dbname = N'TestMergeRep', @optname = N'merge publish', @value = N'false';
    END TRY
    BEGIN CATCH
        PRINT 'sp_replicationdboption failed';
    END CATCH

    ALTER DATABASE TestMergeRep SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE TestMergeRep;
END
GO

-- Create the test database and add a FILESTREAM filegroup,
-- placing the FILESTREAM container next to the data file
CREATE DATABASE TestMergeRep;
GO

DECLARE @path nvarchar(128), @sql nvarchar(max);

SELECT @path = LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name), 1) + 1)
FROM TestMergeRep.sys.database_files
WHERE type = 0;

SET @sql = '
ALTER DATABASE TestMergeRep ADD FILEGROUP [TestmergeRep_FileStream01] CONTAINS FILESTREAM;
ALTER DATABASE TestMergeRep ADD FILE
    ( NAME = N''TestmergeRep_FS01'', FILENAME = ''' + @path + 'TestMergeRep_FS01'' , MAXSIZE = UNLIMITED)
TO FILEGROUP [TestmergeRep_FileStream01];';

EXEC(@sql);
GO
USE TestMergeRep;
GO

CREATE TABLE [dbo].[DataStream](
    [DataStreamGUID] [uniqueidentifier] ROWGUIDCOL NOT NULL,
    [ValueData] [varbinary](max) FILESTREAM NOT NULL,
    PRIMARY KEY CLUSTERED
    (
        [DataStreamGUID] ASC
    )
) FILESTREAM_ON [TestmergeRep_FileStream01];
GO

-- I know it doesn't make sense, but the bug only shows
-- when the table has a second UNIQUE constraint on the PK column
ALTER TABLE [dbo].[DataStream] ADD UNIQUE NONCLUSTERED ([DataStreamGUID]);
GO

-- WORKAROUND: create the UNIQUE index without creating the UNIQUE constraint:
-- CREATE UNIQUE NONCLUSTERED INDEX IX_DataStream_DataStreamGUID
--     ON [dbo].[DataStream] ([DataStreamGUID]);
USE master
EXEC sp_replicationdboption
@dbname = N'TestMergeRep',
@optname = N'merge publish',
@value = N'true';
use [TestMergeRep]
exec sp_addmergepublication
@publication = N'TestMergeRep',
@description = N'Merge publication of database TestMergeRep.',
@retention = 30,
@sync_mode = N'native',
@allow_push = N'true',
@allow_pull = N'true',
@allow_anonymous = N'false',
@enabled_for_internet = N'false',
@conflict_logging = N'publisher',
@dynamic_filters = N'false',
@snapshot_in_defaultfolder = N'true',
@compress_snapshot = N'false',
@ftp_port = 21,
@ftp_login = N'anonymous',
@conflict_retention = 14,
@keep_partition_changes = N'false',
@allow_subscription_copy = N'false',
@allow_synctoalternate = N'false',
@add_to_active_directory = N'false',
@max_concurrent_merge = 0,
@max_concurrent_dynamic_snapshots = 0,
@publication_compatibility_level = N'100RTM',
@use_partition_groups = N'false';
exec sp_addpublication_snapshot
@publication = N'TestMergeRep',
@frequency_type = 1,
@frequency_interval = 1,
@frequency_relative_interval = 1,
@frequency_recurrence_factor = 1,
@frequency_subday = 1,
@frequency_subday_interval = 5,
@active_start_date = 0,
@active_end_date = 0,
@active_start_time_of_day = 10000,
@active_end_time_of_day = 235959;
exec sp_addmergearticle
@publication = N'TestMergeRep',
@article = N'DataStream',
@source_owner = N'dbo',
@source_object = N'DataStream',
@type = N'table',
@description = null,
@column_tracking = N'true',
@pre_creation_cmd = N'drop',
@creation_script = null,
@schema_option = 0x000000010C034FD1,
@article_resolver = null,
@subset_filterclause = N'',
@vertical_partition = N'false',
@destination_owner = N'dbo',
@verify_resolver_signature = 0,
@allow_interactive_resolver = N'false',
@fast_multicol_updateproc = N'true',
@check_permissions = 0,
@identityrangemanagementoption = 'manual',
@delete_tracking = N'true',
@stream_blob_columns = N'false',
@force_invalidate_snapshot = 1;
-- Sets all merge jobs owned by sa
DECLARE @job_id uniqueidentifier;

SELECT @job_id = sj.job_id
FROM msdb.dbo.sysjobs AS sj
INNER JOIN msdb.dbo.syscategories AS sc
    ON sj.category_id = sc.category_id
WHERE sc.name = 'REPL-Merge';

EXEC msdb.dbo.sp_update_job @job_id = @job_id, @owner_login_name = N'sa';

After running the script, start the snapshot agent and you’ll see the error appearing:



One way to get rid of the bug is to enforce the uniqueness of the data by using a UNIQUE index instead of a UNIQUE constraint:

-- The index name is arbitrary
CREATE UNIQUE NONCLUSTERED INDEX IX_DataStream_DataStreamGUID
ON [DataStream] ([DataStreamGUID]);

With this index, the snapshot agent completes correctly. Please note that the index would have been UNIQUE anyway, because its key is a superset of the primary key.

Hope this helps!

Please Vote!

This bug has been filed on UserVoice and can be found here:

Please upvote it!

How To Enlarge Your Columns With No Downtime

Let’s face it: column enlargement is a very sensitive topic. I get thousands of emails every month on this particular topic, although most of them end up in my spam folder. Go figure…

The inconvenient truth is that enlarging a fixed size column is a long and painful operation, that will make you wish there was a magic lotion or pill to use on your column to enlarge it on the spot.


Unfortunately, there is no such magic pill, but it turns out you can use some SQL Server features to make the column enlargement less painful for your users.

First, let’s create a table with one smallint column, that we will try to enlarge later.

-- Go to a safe place
USE tempdb;

IF OBJECT_ID('EnlargeTest') IS NOT NULL
    DROP TABLE EnlargeTest;

-- Create test table
CREATE TABLE EnlargeTest (
    SomeColumn smallint
);

-- Insert 1 million rows
INSERT INTO EnlargeTest (SomeColumn)
SELECT TOP(1000000) 1
FROM master.dbo.spt_values AS A
CROSS JOIN master.dbo.spt_values AS B;

If you try to enlarge this column with a straight “ALTER TABLE” command, you will have to wait for SQL Server to go through all the rows and write the new data type. Smallint is a data type that is stored in 2 bytes, while int requires 4 bytes, so SQL Server will have to enlarge each and every row to accommodate 2 extra bytes.

This operation requires some time and also causes a lot of page splits, especially if the pages are completely full.
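
If you are curious about the effect on the pages, a query along these lines (run before and after the ALTER) reports page count and fullness for the test table; note that sys.dm_db_index_physical_stats only populates the fullness column in SAMPLED or DETAILED mode:

-- Inspect page count and page fullness of the test table
SELECT index_type_desc,
       page_count,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID('tempdb'),
         OBJECT_ID('tempdb.dbo.EnlargeTest'),
         NULL, NULL, 'DETAILED');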


-- Enlarge column
ALTER TABLE EnlargeTest ALTER COLUMN SomeColumn int;



(1000000 rows affected)
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
Table 'EnlargeTest'. Scan count 9, logical reads 3001171, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 8, logical reads 2064041, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 13094 ms,  elapsed time = 11012 ms.


The worst part of this approach is that, while the ALTER TABLE statement is running, nobody can access the table.

To overcome this inconvenience, there is a magic column enlargement pill that your table can take, and it’s called Row Compression.

Row compression pills. They used to be expensive, SQL Server 2016 SP1 made them cheap.

Let’s try and revert to the original column size and take the Row Compression pill:

-- Let's revert to smallint
ALTER TABLE EnlargeTest ALTER COLUMN SomeColumn smallint;

-- Add row compression
ALTER TABLE EnlargeTest REBUILD WITH (DATA_COMPRESSION = ROW);

With Row Compression, your fixed-size columns use only the space needed by the smallest data type that the actual data fits in. This means that for an int column that contains only smallint values, the actual space used inside the row is 1 or 2 bytes, not 4.

This is exactly what you need here:


-- Let's try to enlarge the column again
ALTER TABLE EnlargeTest ALTER COLUMN SomeColumn int;



 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.


Excellent! This time the command completes instantly and the ALTER COLUMN statement is a metadata-only change.
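
If you want to double-check, a sketch along these lines (using the names from the example above) confirms both the new data type and the compression setting:

-- Verify that the column is now int and the table is ROW compressed
SELECT c.name AS column_name,
       t.name AS type_name,
       p.data_compression_desc
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
JOIN sys.partitions AS p ON p.object_id = c.object_id
WHERE c.object_id = OBJECT_ID('dbo.EnlargeTest')
  AND c.name = 'SomeColumn';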

The good news is that Row Compression is available in all editions of SQL Server since version 2016 SP1 and compression can be applied by rebuilding indexes ONLINE, with no downtime (yes, you will need Enterprise Edition for this).

The (relatively) bad news is that I tested this method on several versions of SQL Server and it only works on 2016 and above. Previous versions are not smart enough to take the compression options into account when enlarging the columns and they first enlarge and then reduce the columns when executing the ALTER COLUMN command. Another downside to this method is that row compression will refuse to work if the total size of your columns exceeds 8060 bytes, as the documentation states.

Bottom line is: painless column enlargement is possible, if you take the Row Compression pills. Just don’t overdo it: you don’t want to enlarge your columns too much, do you?

Introducing XESmartTarget

Three years ago, I started to work on a project called ExtendedTsqlCollector. I blogged about it multiple times here on this blog.

Even if I received positive feedback, I knew that one aspect was slowing down the adoption of the project: the Data Collector. That particular feature of SQL Server has a very bad reputation for being difficult to set up, customize and monitor. ExtendedTSQLCollector tried to address some of those issues, but the damage was already done and I knew that the Data Collector was already on the verge of extinction.

So, how could I preserve the work done for that project and, at the same time, offer DBAs the ability to set up complex actions in response to events? What I aimed to release was a tool capable of streaming events from a session and performing configurable actions in response to those events, mainly writing to a database table and sending alerts. The tool I had in mind had to be configurable in a simple and straightforward way, possibly with a single configuration file.

So, back to the drawing board. The tool I came up with ditched the Data Collector, but kept as much as possible from the previous project.

Introducing XESmartTarget

I am proud to introduce XESmartTarget: the easiest way to interact with extended events sessions without writing a single line of code.

XESmartTarget uses the Extended Events streaming API to connect to a session and execute some actions in response to each event. The actions you perform can be configured in a simple .json file, which controls the Response types and all their properties.

For instance, if you have a session to capture deadlocks, you may want to be notified whenever such an event is captured. All you need to do is configure XESmartTarget to send an email using the EmailResponse class. You can do that by creating a simple .json file with all the parameters needed:

"Target": {
"ServerName": "(local)\\SQL2014",
"SessionName": "deadlocks",
"FailOnProcessingError": false,
"Responses": [
"__type": "EmailResponse",
"SMTPServer": "localhost",
"Sender": "",
"To": "dba@localhost.localdomain",
"Subject": "Deadlock occurred",
"Body": "Deadlock occurred at {collection_time}",
"Attachment": "xml_report",
"AttachmentFileName": "deadlock.xdl",
"HTMLFormat": true,
"Events": [
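
The configuration above assumes that an Extended Events session named “deadlocks” already exists on the target instance and captures the xml_deadlock_report event. A minimal sketch of such a session (my assumption, not part of the original configuration) could be:

-- Hypothetical session definition: the .json only requires that a session
-- named "deadlocks" exists and exposes the xml_deadlock_report event
CREATE EVENT SESSION [deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [deadlocks] ON SERVER STATE = START;
GO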

XESmartTarget will connect to your session and will execute an EmailResponse for every event you capture. What will you see? Not much, because XESmartTarget is a console application that you are supposed to run in the background:


What you will actually see is email messages flowing to your inbox:


This is just an example of what XESmartTarget can do: you already have Response types to write events to a database table or replay execution-related events to a target instance. More Response types are in the works as well, but the good news is that XESmartTarget is open source, so you can code your own and contribute it to the project on GitHub.

What are you waiting for? Download and try XESmartTarget now!

Installing SQL Server 2016 Language Reference Help from disk

A couple of years ago I blogged about Installing the SQL Server 2014 Language Reference Help from disk.

With SQL Server 2016 things changed significantly: we have the new Help Viewer 2.2, which is shipped with the Management Studio setup kit.

However, despite all the changes in the way help works and is shipped, I am still unable to download and install help content from the web, so I resorted to using the same trick that I used for SQL Server 2014.

This time the URLs and the files to download are different:

    1. Point your browser to
    2. Download the Language Reference Files:

      If you’re a PowerShell person, these three commands will do:

Invoke-WebRequest -Uri "" `
	-OutFile "sql_2016_branding_en-us(1bd6e667-f159-ac3b-f0a5-964c04ca5a13).cab"
Invoke-WebRequest -Uri "" `
	-OutFile "v2sql_shared_language_reference_b4621_sql_130_en-us_1(83748a56-8810-751f-d453-00c5accc862d).cab"
Invoke-WebRequest -Uri "" `
	-OutFile "v2sql_shared_language_reference_b4621_sql_130_en-us_2(ccc38276-b744-93bd-9008-fe79b294ff41).cab"
      3. Create a text file named HelpContentSetup.msha in the same folder as the .cab files and paste the following html:
<html xmlns="">
<head />
<body class="vendor-book">
    <div class="details">
        <span class="vendor">Microsoft</span>
        <span class="locale">en-us</span>
        <span class="product">SQL Server 2016</span>
        <span class="name">Microsoft SQL Server Language Reference</span>
    </div>
    <div class="package-list">
        <div class="package">
            <span class="name">SQL_2016_Branding_en-US</span>
            <span class="deployed">False</span>
            <a class="current-link" href="sql_2016_branding_en-us(1bd6e667-f159-ac3b-f0a5-964c04ca5a13).cab">sql_2016_branding_en-us(1bd6e667-f159-ac3b-f0a5-964c04ca5a13).cab</a>
        </div>
        <div class="package">
            <span class="name">v2SQL_Shared_Language_Reference_B4621_SQL_130_en-us_1</span>
            <span class="deployed">False</span>
            <a class="current-link" href="v2sql_shared_language_reference_b4621_sql_130_en-us_1(83748a56-8810-751f-d453-00c5accc862d).cab">v2sql_shared_language_reference_b4621_sql_130_en-us_1(83748a56-8810-751f-d453-00c5accc862d).cab</a>
        </div>
        <div class="package">
            <span class="name">v2SQL_Shared_Language_Reference_B4621_SQL_130_en-us_2</span>
            <span class="deployed">False</span>
            <a class="current-link" href="v2sql_shared_language_reference_b4621_sql_130_en-us_2(ccc38276-b744-93bd-9008-fe79b294ff41).cab">v2sql_shared_language_reference_b4621_sql_130_en-us_2(ccc38276-b744-93bd-9008-fe79b294ff41).cab</a>
        </div>
    </div>
</body>
</html>
      4. First, set the Help Viewer to open help from the local sources:
      5. Then select the “Add and Remove Help Content” command:
      6. This command opens the Help Viewer and asks for the content to add.
        Browse to the file you created in step 3.
        Click “Add” on all the items you wish to add to the library. In this case you will have only 1 item.
        When done, click the “Update” button.
      7. Unfortunately, during the installation phase of the library item, something crashes and the installation won’t proceed until you tell it to ignore or report the error.
      8. Despite the crash, everything works as expected and you will find the topic installed in your help library:

Here it is, nice and easy. Hope it works for you too.

SSMS is now High-DPI ready

One of the most popular posts on this blog describes how to enable bitmap scaling in SSMS on high DPI displays, which is a sign that more and more people are starting to use 4K displays and are unhappy with SSMS’s behaviour at high DPI. The solution described in that post is to enable bitmap scaling, which renders graphic objects correctly, at the price of some blurriness.

The good news is that starting with SSMS 16.3 high DPI displays are finally first class citizens and SSMS does its best to scale objects properly. By default, SSMS will keep using bitmap scaling: in order to enable DPI scaling you will have to use a manifest file.

  1. Merge this key to your registry:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001
  2. Save this manifest file to “C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Ssms.exe.manifest” using UTF-8 format:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">

    <dependency>
        <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="" processorArchitecture="X86" publicKeyToken="6595b64144ccf1df" language="*" />
        </dependentAssembly>
    </dependency>

    <dependency>
        <dependentAssembly>
            <assemblyIdentity type="win32" name="debuggerproxy.dll" processorArchitecture="X86" version="" />
        </dependentAssembly>
    </dependency>

    <asmv3:application>
        <asmv3:windowsSettings xmlns="">
            <dpiAware>true</dpiAware>
        </asmv3:windowsSettings>
    </asmv3:application>

</assembly>

This is a huge improvement over the bitmap scaling solution we had to use up to now: no more blurriness and proper fonts are used in SSMS.

For comparison, this is how bitmap scaling renders in SSMS 2014:


And this is how DPI scaling renders in SSMS 16.3, with scaling set to 200%:


As you can see, it’s not perfect yet (for instance, I had to change the grid font size to 9pt. in order to have readable fonts).

However, the GUI is much more readable now. For instance, look at the difference in object explorer: (click on the image to open fullsize and see the difference)


Now that your favourite tool is working in high DPI displays, nothing is holding you back from buying one of those fancy 4K laptops!

Upcoming Speaking Engagements

The next few months will be crazy for me. I will be travelling a lot around Europe, speaking about SQL Server topics.

Here is where you will find me in the upcoming weeks:

SQLNexus is a new conference in Copenhagen and will be the official launch event in Denmark for SQL Server 2016.
I will speak about “Responding to Extended Events in near Real-Time”.

SQLBits doesn’t need to be introduced. It’s the biggest SQL Server event in Europe and it will also be the official SQL Server 2016 launch event in UK.
Again, I will speak about “Responding to Extended Events in near Real-Time”.

NTK is a famous conference in Slovenia, which has been running for many years now. This year it will be held in Portorosz, near the Italian border.
It’s going to be fun: I will speak about “Benchmarking, Baselining and Workload Analysis”.

INSIDE-SQL is also a new event, run this year for the first time. It’s an event focused on in-depth concepts, delivered in 75-minute sessions, with long breaks between sessions, so that attendees can go to the next session without rushing and can hang out with the speakers and ask questions.
I will deliver two sessions:
“Responding to Extended Events in near Real-Time”
“Advanced T-SQL Techniques”

Free SQL Server training from the 40th floor of the Tour Montparnasse in Paris. What else can I say?
I will be speaking about “New Security Features in SQL Server 2016”

One of the coolest SQLSats around!
This time I’ll be speaking about “SQL Server Infernals”, AKA “how to torture your SQL Server instances to death with worst practices”. It’s going to be fun!

I hope to see you at these super cool events!

An annoying Bug in the Restore Dialog

Today, thanks to a customer, I discovered one of those annoying little things that can really drive you nuts.

Basically, they were trying to restore a backup using the SSMS Restore Database window and they kept getting “No backupset selected to be restored” whenever a backup file was selected.

You just had to select a file for restore and click OK…


… to be met with an error message in the Restore Database window:


The weird thing about it is that the backup file restored perfectly fine from a T-SQL script:
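
The script was nothing fancy; a hypothetical equivalent (database name and path invented for illustration) would be:

RESTORE DATABASE [MyDatabase]
FROM DISK = N'C:\Backups\ MyDatabase.bak'
WITH RECOVERY, STATS = 10;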


So it had to be something wrong with SSMS, but what?

Looking closer at the restore script, one thing stands out. Look at the file name:


Yep, there’s a leading whitespace in the file name. Could that be the source of the problem?

Let’s try again with the GUI in a slightly different way. This time I will copy the folder path from the “Backup File Location” textbox…


… and paste it directly in the “File name” textbox, right before the file name:


This time everything works as expected.

Bottom line:

  1. This is a bug in SSMS: go on and vote this Connect item to have it fixed in a future version.
  2. Don’t use the GUI to restore a database.
  3. Don’t use the GUI at all.

SSMS in High-DPI Displays: How to Stop the Madness

I spent the last few weeks travelling a lot compared to my usual routine and I got quickly fed up with bringing my mobile workstation with me. My Dell precision M4600 is a powerhouse, but it’s definitely too big and too heavy for travelling, so I decided it was time to buy a small laptop just for travelling and presenting at SQL Saturdays and events around Europe.

After lots of research, I pulled the trigger on a new XPS13. It’s a very nice, tiny and light machine, beefy enough to run my virtual machines and slim enough to fit in a small bag.

So, everything fine? Not quite. The XPS13 2015 sports a stunning QHD touch display, with almost no bezel: a small wonder. The only complaint is Windows and its (in)ability to scale applications when running at high DPI.

Windows got better at scaling applications on HiDPI displays and Windows 10 is doing a remarkably good job at scaling applications that can actually scale. I am running at maximum resolution (3200 x 1800) with 200% scale factor and, for instance, Visual Studio 2015 renders perfectly:


Other applications scale horribly. For instance, SSMS (SQL Server Management Studio) 2014 is a complete disaster:


At a first glance, it looks ok, but look at those icons on the Object Explorer tree: aren’t they a little bit too small compared to the labels next to them?

I can live with small icons, but you just need to open some dialog windows to understand how bad things really are:


As you can see, text gets scaled correctly, but the size of the window and some controls are completely wrong.

While some dialog windows are easily fixed by resizing them, some others are completely unusable. For instance, the restore database dialog, when locating a backup file, looks like this:


I’m not the first one to discover this kind of issues: Scott Hanselman posted his impressions quite a long time ago, but he offered no solution.

There’s also a Connect item that describes this issue, but… ok, you know what I’m about to say.

A new hope from an old technique

In Windows Vista, you had two possible ways of scaling applications: with the first one (the default) applications were instructed to scale their objects using the scaling factor imposed by the operating system. The results, depending on the quality of the application and the Windows version, could vary a lot. Some scaled correctly, some others looked very similar to what we are seeing in SSMS, with some weird-looking GUIs. In Vista, this option was called “XP style DPI scaling”.

The second option, which you could activate by unchecking the “XP style” checkbox, involved drawing the graphical components of the GUI to an off-screen buffer and then drawing them back to the display, scaling the whole thing up to the screen resolution. This option is called “bitmap scaling” and the result is a perfectly laid out GUI:


In order to enable this option in Windows 10, you need to merge this key to your registry:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001

Then, the application has to be decorated with a manifest file that instructs Windows to disable DPI scaling and enable bitmap scaling, by declaring the application as DPI unaware. The manifest file has to be saved in the same folder as the executable (ssms.exe) and its name must be ssms.exe.manifest. In this case, for SSMS 2014, the file path is “C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Ssms.exe.manifest”.

Paste this text inside the manifest file and save it in UTF-8 encoding:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">

    <dependency>
        <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="" processorArchitecture="*" publicKeyToken="6595b64144ccf1df" language="*" />
        </dependentAssembly>
    </dependency>

    <dependency>
        <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8" processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b" />
        </dependentAssembly>
    </dependency>

    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
        <security>
            <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
            </requestedPrivileges>
        </security>
    </trustInfo>

    <asmv3:application>
        <asmv3:windowsSettings xmlns="">
            <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="">false</ms_windowsSettings:dpiAware>
        </asmv3:windowsSettings>
    </asmv3:application>

</assembly>

This “Vista style” bitmap scaling is very similar to what Apple is doing on its Retina displays, except that Apple uses a different font rendering algorithm that looks better when scaled up. If you use this technique in Windows, ClearType rendering is performed on the off-screen buffer before upscaling, so the final result might look a bit blurry.

The amount of blurriness you will see depends on the scale factor you set in the control panel or in the settings app in Windows 10. Needless to say, exact pixel scaling looks better, so prefer 200% over 225% or 250% scale factors, because there is no such thing as “half a pixel”.

Seeing is believing:

Bitmap scaling at 200%

Bitmap scaling at 225%

Bitmap scaling at 250%

That’s it: nice and easy. Enjoy your High DPI lifestyle, until Microsoft comes up with a better solution (or a better SSMS).