Data Warehousing, BI and Data Science

1 February 2010

How long did it run?

Filed under: SQL Server — Vincent Rainardi @ 8:15 am

A common way to measure how long a SQL statement ran is to use SQL Profiler. Alternatively, we can use the getdate() function. My preference is to use Set Statistics. In this post I will describe these 3 methods.

Using SQL Profiler

To use SQL Profiler, start a new trace and go to the Events Selection tab. Select ‘Show all events’ and ‘Show all columns’. Scroll from top to bottom and clear the check boxes on every event. On the SQL:StmtCompleted event, select these columns: Duration, EndTime, StartTime and TextData, and clear all other columns as shown below.

To display only the SQL statement(s) that you are running, click on Column Filters and edit the TextData filter, mentioning a keyword from your statements, like this:

Click Run to start the trace, then run your SQL statement(s). The output is like this:

In the above trace I ran the same SQL statement 3 times. As you can see, they took 17.558, 15.933 and 14.921 seconds respectively.

Using getdate()

Sometimes we can’t use SQL Profiler, for example if we want to run the measurements programmatically. When measuring, I run the statement several times and take the average. I usually throw away the first 3 measurements to make sure that the system stabilises first, for example to eliminate the effect of caching, and also to eliminate ‘overshoot’ and ‘ringing’, as illustrated by the time response diagram below (from Wikipedia):
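The warm-up-and-average approach described above can be sketched in T-SQL like this. This is only a sketch: the table variable, the run count of 10 and the cut-off of 3 warm-up runs are illustrative choices, not from the measurements in this post.

```sql
-- Run the statement 10 times, record each duration, then discard the
-- first 3 runs (warm-up) and average the rest.
declare @durations table (RunNo int identity(1,1), DurationMs int)
declare @i int, @start datetime, @finish datetime

set @i = 1
while @i <= 10
begin
  set @start = getdate()

  -- place the SQL statement you want to measure here

  set @finish = getdate()
  insert into @durations (DurationMs)
  values (datediff(ms, @start, @finish))
  set @i = @i + 1
end

-- average in seconds, excluding the 3 warm-up runs
select avg(DurationMs)/1000.0 as AvgDurationSec
from @durations
where RunNo > 3
```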

To record how long a SQL statement runs I just use getdate(), then datediff in milliseconds, before converting them to seconds, as follows:

declare @start datetime, @finish datetime, @counter int, @duration float

-- get the start time
set @start = getdate()

-- do something
set @counter = 0
while @counter <= 1000
  set @counter = @counter + 1

-- get the finish time
set @finish = getdate()

-- calculate the duration in milliseconds, then convert to seconds
set @duration = datediff(ms, @start, @finish)/1000.0

-- display the start and finish times, and the duration
print @start
print @finish
print @duration -- can also write to a table
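As the last comment notes, the duration can also be written to a table instead of printed, which makes it easier to collect results from programmatic runs. A minimal sketch; the MeasurementLog table and the statement name are hypothetical, not from the original post:

```sql
-- Hypothetical log table for collecting measurements across runs
create table MeasurementLog
( RunTime datetime, StatementName varchar(100), DurationSec float )

declare @start datetime, @finish datetime
set @start = getdate()

-- place the SQL statement you want to measure here

set @finish = getdate()
insert into MeasurementLog (RunTime, StatementName, DurationSec)
values (@start, 'my statement', datediff(ms, @start, @finish)/1000.0)
```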

The measurements from this getdate method differ from the SQL Profiler outputs by about 0.12% on average, as shown in the following pairs (first number: getdate, second: profiler): (17580, 17588), (15940, 15933), (14933, 14921), (16397, 16329), (29377, 29360), (33326, 33309).

Using Set Statistics

The command is quite simple: "set statistics time on". Then we execute the SQL statement we want to measure. For example:

set statistics time on

select D1.Attr1, D2.Attr1, D3.Attr1, sum(measure1), sum(measure2)
from FactPKNonClus F
inner join dim1 D1 on D1.dimkey1 = F.dimkey1
inner join dim2 D2 on D2.dimkey2 = F.dimkey2
inner join dim3 D3 on D3.dimkey3 = F.dimkey3
where D1.Attr2 between 'Value1' and 'Value2'
and D2.Attr2 between 'Value1' and 'Value2'
and D3.Attr2 between 'Value1' and 'Value2'
group by D1.Attr1, D2.Attr1, D3.Attr1

The result is not on the Results tab, but on the Messages tab:

SQL Server parse and compile time:
   CPU time = 250 ms, elapsed time = 649 ms.

(106 row(s) affected)

SQL Server Execution Times:
   CPU time = 3906 ms, elapsed time = 8180 ms.

It’s the elapsed time on the SQL Server execution times that we are looking for (8180 ms). That’s the duration when the SQL statement ran.

The parse and compile time will reduce when you rerun the SQL statement, and eventually it will become zero, because the statement will already have been parsed and compiled. For example, when we execute the above SQL statement 2 more times (just the select statement; there is no need to re-execute the set statistics), we get:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

(106 row(s) affected)

SQL Server Execution Times:
   CPU time = 3766 ms, elapsed time = 7468 ms.

Notice that the elapsed time also decreases (from 8180 to 7468 ms). When measuring, ideally we need to run the statement several times until the execution time is more or less stable. Stable means it does not go down any more, but goes up and down around a certain point. At that point we then measure several times (say 3 times) and take an average.
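Conversely, if you want cold-cache timings rather than warmed-up ones, you can clear the caches between runs. A sketch, assuming a test server (never do this on production, as it will slow everything down until the caches rebuild):

```sql
-- Clear the plan cache so the statement is parsed and compiled again
dbcc freeproccache

-- Write dirty pages to disk, then clear the data cache so pages
-- are read from disk again on the next run
checkpoint
dbcc dropcleanbuffers

set statistics time on
-- run the SQL statement you want to measure here
set statistics time off
```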

To switch off the set statistics, the command is "set statistics time off".


