Data Platform and Data Science

25 May 2010

Optimising Cube Query Performance and Processing Performance

Filed under: Analysis Services — Vincent Rainardi @ 7:43 am

Background

This question crops up again and again: how do you optimise the query performance of a large cube? Eight times out of ten, what people really mean is “my users are complaining that their reports are slow”, the reports being Excel, SSRS or PPS. “Slow” usually means a report that normally takes 1 second is today taking 1 minute.

The other 20% is genuine cube improvement: cube developers looking for ways to make their cubes better and faster. Usually they are also looking for ways to optimise the processing time (the cube refresh time).

In this article I’ll try to (briefly) explain where and what to look at, i.e. just the pointers. I’ll write down my experience and thoughts so readers can get the high-level idea, and point to more complete sources for those who need the detail of how to do each step.

Query Performance

I usually look and work in this order: MDX queries, aggregations, partitioning, server settings. First I look at the MDX queries, to see whether one specific query (or a few) is causing the issue. If so, I try to optimise that query. The two most frequent problems are: 1) cell-by-cell evaluation instead of block computation, and 2) attributes from the same dimension not placed “next to each other” in the query, causing the number of tuples to exceed 4.2 billion (read Mosha’s blog on this, and mine). We can capture the MDX using Profiler; no, the OlapQueryLog doesn’t give us the MDX (its 101010… Dataset column only records which attributes were requested, in attribute order).
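
As an illustration of the first problem, here is a minimal sketch; the cube, measure and dimension names are all made up. The IIF test below is redundant because a null [Sales Amount] already propagates to a null result, so the calculation can be written as a plain division, which the formula engine can evaluate in block mode instead of cell by cell (the only difference in results is that a genuine zero returns 0 rather than null):

    WITH
      -- Slow form: the IIF tends to push evaluation towards cell-by-cell mode
      MEMBER [Measures].[Sales Ratio Slow] AS
        IIF([Measures].[Sales Amount] = 0,
            NULL,
            [Measures].[Sales Amount]
              / ([Product].[Category].[All], [Measures].[Sales Amount]))
      -- Block-friendly form: a null numerator already returns null, so drop the IIF
      MEMBER [Measures].[Sales Ratio Fast] AS
        [Measures].[Sales Amount]
          / ([Product].[Category].[All], [Measures].[Sales Amount])
    SELECT { [Measures].[Sales Ratio Fast] } ON COLUMNS,
           NON EMPTY [Product].[Product].[Product].MEMBERS ON ROWS
    FROM [Sales]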

If the MDX queries are OK, I look at the aggregations. On 2008, design the aggregations for each measure group (MG): per partition if possible, usage-based if possible. The third thing to look at is partitioning. I try either to split the MG so that the majority of queries hit a small partition (the current month, for example), or to split the MG so that queries are distributed evenly across many partitions (by product, for example), or to do both.
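
For the usage-based part, it is worth querying the query log that SSAS writes (when the Log\QueryLog server properties are enabled) before designing the aggregations. A rough T-SQL sketch, assuming the default dbo.OlapQueryLog table and, if I remember correctly, Duration in milliseconds:

    -- Which measure groups and subcubes are requested most often, and how slowly?
    SELECT TOP 20
           MSOLAP_ObjectPath,            -- server.database.cube.measure group
           Dataset,                      -- the 101010... attribute bitmap
           COUNT(*)      AS QueryCount,
           AVG(Duration) AS AvgDuration
    FROM dbo.OlapQueryLog
    GROUP BY MSOLAP_ObjectPath, Dataset
    ORDER BY QueryCount DESC, AvgDuration DESC;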

Last is server settings. There are three major areas: memory, disk/IO and CPU. If the AS instance is on the same server as SQL Server, look at the memory counters. Limit the SQL Server memory if possible; if not, move AS out to another server, dedicated if possible. Also look at the upper/lower memory limit settings, the balance between processing and querying, and the Windows vs SSAS heap (look under Memory Heap Type in the 2008 performance guide).
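
Before changing any limits it helps to see what is actually holding memory inside the AS instance. SSAS 2008 exposes DMVs for this; if I remember the rowset name correctly it is $SYSTEM.DISCOVER_MEMORYUSAGE (run it from an MDX window in Management Studio, and do verify the name against Books Online):

    -- Lists the memory holders inside the instance, shrinkable and non-shrinkable
    SELECT * FROM $SYSTEM.DISCOVER_MEMORYUSAGE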

For IO we can look at Threads\Query\MaxThreads (refer to the 2008 performance guide for the settings) and the query thread pool counters: QueryPoolJobQueueLength, IdleThreads, BusyThreads. When configuring a new AS server, choose RAID 1+0 over RAID 5 or RAID 1; AS data files are not as big as SQL Server data files. If a big query blocks many small queries, set the Coordinator Query Balancing Factor (and Boost Priority). If there is a ROLAP dimension or MG, look at the underlying table/view.

If you want to be systematic, first determine whether the bottleneck is in the Storage Engine (SE) or the Query Processor (QP). We can do this using event subclass 22 in Profiler. If it is an SE issue, look at: aggregations, partitioning and dimension design (hierarchies and attribute relationships). If it is a QP issue, look at: the MDX queries, cache warming, and server settings such as IO and memory. Cache warming means either creating the cache or running representative queries after processing.
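
For the cache-warming option, the simplest sketch is to replay a few representative queries straight after processing; CREATE CACHE is the other route (the syntax below is from memory, so verify it, and all object names are made up). Run the two statements separately:

    -- Option 1: run a representative query so the caches get populated
    SELECT { [Measures].[Sales Amount] } ON COLUMNS,
           NON EMPTY [Date].[Calendar Month].[Calendar Month].MEMBERS ON ROWS
    FROM [Sales]

    -- Option 2: load a specific subcube into cache without returning cells
    CREATE CACHE FOR [Sales] AS
      ({ [Measures].[Sales Amount] },
       [Date].[Calendar Month].[Calendar Month].MEMBERS,
       [Product].[Category].[Category].MEMBERS)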

Processing Performance

First look at the processing output window to see whether the issue is in a dimension or in a measure group. If it is a dimension issue, consider: a proper dimension table (instead of SELECT DISTINCT from the transaction/fact table), process default, or a materialised/indexed view (to provide the distinct values).
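
A minimal sketch of the indexed-view idea, with hypothetical table and column names: the view pre-computes the distinct customer rows from the transaction table, and the unique clustered index is what materialises it, so dimension processing reads a small materialised result instead of scanning the whole transaction table:

    -- Schema-bound view returning the distinct dimension rows.
    -- COUNT_BIG(*) and two-part table names are indexed-view requirements.
    CREATE VIEW dbo.vDimCustomer
    WITH SCHEMABINDING
    AS
    SELECT  CustomerCode,
            CustomerName,
            COUNT_BIG(*) AS SourceRowCount
    FROM    dbo.SalesTransaction
    GROUP BY CustomerCode, CustomerName;
    GO

    -- The unique clustered index turns it into a materialised (indexed) view
    CREATE UNIQUE CLUSTERED INDEX IX_vDimCustomer
        ON dbo.vDimCustomer (CustomerCode, CustomerName);
    GO

The dimension (or a named query in the DSV) then points at dbo.vDimCustomer rather than at a SELECT DISTINCT over the transaction table.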

If it is an MG issue, look at: partitioning the MG (so that simultaneous reads happen), physical table partitioning (aligned to the MG partitions), incremental processing (only process the partitions that have changed, which needs to be flagged in the ETL), and the aggregations (are they all required? Compare them with actual usage: use Profiler and log the output to a table).
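
On the incremental-processing point, the “flag it in the ETL” bit can be as simple as recording which months the current load touched, then only issuing process commands for those MG partitions. A rough sketch, with hypothetical table and column names:

    -- Months touched by the current ETL batch; each one maps to an MG partition
    -- that needs reprocessing, everything else can be left alone
    DECLARE @CurrentBatchId int = 12345;   -- set by the ETL framework

    SELECT DISTINCT
           CONVERT(char(6), f.TransactionDate, 112) AS PartitionMonth   -- yyyymm
    FROM   dbo.FactSales AS f
    WHERE  f.LoadBatchId = @CurrentBatchId;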

If you are building cubes directly from operational systems, a materialised view can improve performance hundreds of times over. The catch is that there are many restrictions: a lot of constructs cannot be used when writing the SELECT statement, and the owners of the operational system may not allow us to create a view on their database anyway.

Aligning the table partitions to the MG partitions means: if the MG is partitioned by month, then the table should be partitioned by month too. This way, when the MG is processed, each MG partition reads simultaneously from a different table partition, avoiding IO contention. There is no point partitioning the table if all the MG partitions hit the same table partition; that would only bring the SQL Server to its knees.
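
A sketch of the relational side of that alignment (names and boundary dates are illustrative), so that the measure-group partition for, say, March 2010 only ever reads its own table partition:

    -- Monthly partition function and scheme; add a boundary per month going forward
    CREATE PARTITION FUNCTION pfSalesMonth (date)
        AS RANGE RIGHT FOR VALUES ('20100101', '20100201', '20100301', '20100401');

    CREATE PARTITION SCHEME psSalesMonth
        AS PARTITION pfSalesMonth ALL TO ([PRIMARY]);

    -- The fact table sits on the scheme, partitioned by the same date column
    -- that the MG partitions filter on
    CREATE TABLE dbo.FactSales
    (   TransactionDate date  NOT NULL,
        ProductKey      int   NOT NULL,
        SalesAmount     money NOT NULL,
        LoadBatchId     int   NOT NULL
    )
    ON psSalesMonth (TransactionDate);

Each MG partition’s source query then filters on the same month boundaries (WHERE TransactionDate >= '20100301' AND TransactionDate < '20100401'), so SQL Server’s partition elimination does the rest.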

——————————–

Apologies, I wrote this in a hurry on my way to work, hence the somewhat unedited, rushed writing style. But I do hope it is helpful for some. If you spot anything inaccurate, I’d be grateful if you’d point it out. There are many SSAS experts (see my who’s who list) who may have different opinions; I’d be more than happy to listen to their advice.

4 Comments »

  1. […] wrote about performance tuning on SSAS about 5 months ago. That was on both processing performance and query performance. This time I […]

    Pingback by Tuning Cube Processing Performance « Data Warehousing and Business Intelligence — 19 November 2010 @ 8:23 am

  2. […] think what I wrote here [Optimising Cube Query and Processing Performance] is quite comprehensive to be used as a base for our questions. As per its title, it covers the […]

    Pingback by SSAS Developer Interview Questions « Data Warehousing and Business Intelligence — 30 November 2010 @ 9:47 pm

  3. Here are this and some other articles on SSAS Performance:

    http://ssas-wiki.com/w/Articles#Performance

    Comment by Sam Kane — 21 March 2011 @ 2:01 pm

  4. […] Optimising Cube Query Performance and Processing Performance […]

    Pingback by Performance tips for SSAS | James Serra's Blog — 26 December 2011 @ 6:11 am

