About Gerhard Brueckl

Gerhard has been working with Microsoft BI tools since 2006, mainly focusing on Microsoft SQL Server and its components. As a consultant and architect he has designed various enterprise BI solutions, primarily in the German-speaking area. His personal interest has always been in analytical databases and their capabilities. From the start he has been active in the Analysis Services community, and in 2012 he achieved the SSAS Maestro certification. This was also about the time he started his own blog and began speaking at international conferences all over the world.

C# Wrapper for Power BI REST API


UPDATE 2017-05-18:
I released a new version of this project and also published it on GitHub: https://github.com/gbrueckl/PowerBI.API.Client
A blog post which refers to the updates can be found here.


Since the last major update last year, Power BI offers some APIs which can be used to interact with content and also data that is stored in Power BI. Microsoft provides a good set of samples on how to use the APIs on GitHub and also an interactive APIARY web-UI which you can use to build and test API calls on-the-fly. However, it can still be quite cumbersome as you have to deal with all the REST API calls and the returned JSON on your own. So I decided to write a little C# wrapper where you simply pass in your Azure AD Application Client ID and you can deal with all objects of the Power BI API as if they were regular C# objects.

Here is a little example on how to list all available reports and get the EmbedURL of a given tile using the PowerBIClient:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using pmOne.PowerBI;
using pmOne.PowerBI.PowerBIObjects;

namespace SampleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            PowerBIClient pbic = new PowerBIClient("ef4aed1a-9cab-4bb3-94ea-ffffffffffff");

            Console.WriteLine("Available Reports:");
            foreach(PBIReport pbir in pbic.Reports)
            {
                Console.WriteLine(pbir.Name);
            }

            Console.WriteLine();
            Console.WriteLine("Get EmbedURL for Tile [Retail Analysis Sample].[This Year's Sales]");
            Console.WriteLine(pbic.GetDashboardByName("Retail Analysis Sample").GetTileByName("This Year's Sales").EmbedURL);

            Console.WriteLine("Press <Enter> to exit ...");
            Console.ReadLine();
        }
    }
}

As you can see, it's pretty simple and very easy to use, even for non-developers. You can find all the source code and the sample application for download below. The code as I have written it is very likely not the best code possible, but it works for my needs, is straightforward, simple and saves me a lot of work and time when dealing with the Power BI API. Also, if the API changes, you may need to adapt the code accordingly. However, for the future I hope that Microsoft provides some metadata so that Visual Studio can build all this code automatically using e.g. Swagger. But for the time being feel free to use, improve or extend my code.

SourceCode: PowerBIClient_Source.zip

Open Analysis Services Tabular Database Online

Those of you who have been working with SSAS Multidimensional in the past probably know that you can connect online to your SSAS database via Visual Studio / Data Tools.
Open_SSAS_DB_Visual_Studio

Any change you make (and save) online will be directly deployed to the server and is visible to the end-user immediately. This can be very convenient if you want to quickly check something or do some hot-fixes (e.g. changing the MDX script). But be aware that structural changes might require you to process the changed and dependent objects, so be sure about what you are changing online, especially if you are connecting to a productive environment!

I am quite sure that everyone who works with SSAS Tabular has also tried this feature for his Tabular database and ended up with the following error message:
”You are trying to connect to <servername> server running in tabular mode using the Tabular Model Designer. The option to open an Analysis Services Database is supported for servers running multidimensional mode only.”
Open_SSAS_DB_Visual_Studio_Tabular

So this simply does not work out of the box. However, there is a neat workaround which allows you to connect to your online SSAS Tabular database and do any changes you want. The idea behind this is to use the online database as our workspace database.

The first thing to do is to open Visual Studio / Data Tools and import the existing database into a new Project:
Visual_Studio_Import_SSAS_DB_Tabular

Then you need to select your workspace server. If there is no pop-up asking you for a workspace server, then you have already configured a default one which will be used in this case. As we are going to change this in the next step again, it does not really matter which workspace server you choose.

Now you are asked for the database which you want to import – choose the one that you want to connect to online. Once the import process is finished, Visual Studio already creates a workspace database for you and names it as follows: “<VS ProjectName>_<NT-Username>_<random GUID>” – in my case it was “TabularProject1_gbrueckl_b44f11de-21f4-4d18-bf67-0c25652fceba”. Any change you make in Visual Studio will be deployed directly to this database. Closing Visual Studio will unload the workspace database from memory by default.

Having all this information, you can probably imagine where this is leading. The workspace database name and server can be configured and are persisted in the user-specific “Model.bim_<user>.settings”-file located in your project folder:

Model.bim_gbrueckl.settings
<?xml version="1.0" encoding="utf-8"?>
<ModelUserSettings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ServerName>localhost\TAB2014</ServerName>
  <DatabaseName>AdventureWorksDW2012_Online_gbrueckl_b44f11de-21f4-4d18-bf67-0c25652fceba</DatabaseName>
  <DbRetention>OnDisk</DbRetention>
  <SnapshotBackup>DoNotKeepBackup</SnapshotBackup>
  <Annotations />
  <IsRecalcRequired>false</IsRecalcRequired>
  <IsImpersonationModified>false</IsImpersonationModified>
  <CheckForImpersonationWarning>false</CheckForImpersonationWarning>
  <RequirePastedTablesUpgrade>false</RequirePastedTablesUpgrade>
  <TruncatedTables />
  <IsPowerPivotMetadataScriptExecuted>false</IsPowerPivotMetadataScriptExecuted>
  <IsASImport>false</IsASImport>
  <IsPowerPivotImport>false</IsPowerPivotImport>
  <SelectedCompatibilityLevel>300</SelectedCompatibilityLevel>
</ModelUserSettings>

The settings you need to change here should be obvious – <ServerName> and <DatabaseName>. Change them to match your online SSAS Tabular server and database – the one you previously imported the project from.
But be sure to also change the <DbRetention>! Leaving the default “OnDisk” here would unload your database once you close Visual Studio, which is definitely not what you want! You need to set this setting to “InMemory” to keep the database in memory – remember, we want to connect online but keep the database accessible once we are done.

Before you do all these changes you should close your Visual Studio solution completely to ensure there is nothing cached internally. Then simply open the .settings-file, make the changes described above and re-open your solution. If you have done everything correctly you should already see data for all of your tables:
Visual_Studio_SSAS_Tabular_with_Data
This is already the live-data that resides in your online database!

Congratulations! You are now connected online to your SSAS Tabular Database!

This approach can also be very useful if you are working with a backup of an SSAS Tabular database, as it allows you to make online changes to the restored database and see all the data that exists in the database. Importing only the database project without this little hack would leave you with an empty database project which is very hard to work with if you need to create new calculations. Further, this also does not require you to reprocess the whole database, which might not even be possible if you have no connection to the data sources underneath!

But before you get too excited about this, there are some more things to keep in mind:

  • This is not officially supported by Microsoft
  • This was just experimental but proved (for me) to be very handy in some scenarios
  • Opening an SSAS Tabular solution in Visual Studio sends an ALTER statement to the workspace database (in this case this is your productive database!) and updates the server with the metadata defined in your local .bim-file. If your server database changes frequently, this is probably not what you want as you would overwrite changes done by someone else recently. To work around this issue you would need to re-create/import your SSAS project every time before making any online changes to make sure you are always re-deploying the current state of the database when opening your local SSAS project.
  • I am not responsible for any data loss, damage or whatsoever!

If you want to use this approach to deploy hot-fixes and this happens frequently, you may also consider using a more professional approach for this – for example the BISM Normalizer Visual Studio Add-In which allows you to select the changes that you want to deploy to a target server, similar to schema compare for SQL Server.

Calculating Pearson Correlation Coefficient using DAX

The original request for this calculation came from one of my blog readers who dropped me a mail asking if it is possible to calculate the Pearson Correlation Coefficient (PCC or PPMCC) in his PowerPivot model. In case you wonder what the Pearson Correlation Coefficient is and how it can be calculated – as I did in the beginning – these links What is PCC, How to calculate PCC are very helpful and also offer some examples and videos explaining everything you need to know about it. I highly recommend reading the articles before you proceed here, as I will not go into the mathematical details of the calculation again in this blog post, which is dedicated to the DAX implementation of the PCC.

UPDATE 2017-06-04:
Daniil Maslyuk posted an updated version of the final calculation using DAX 2.0 which is much more readable as it is using variables instead of separate measures for every intermediate step. His blog post can be found at https://xxlbi.com/blog/pearson-correlation-coefficient-in-dax/

Anyway, as I know your time is precious, I will try to sum up its purpose for you: “The Pearson Correlation Coefficient calculates the correlation between two variables over a given set of items. The result is a number between -1 and 1. A value higher than 0.5 (or lower than -0.5) indicates a strong relationship whereas numbers towards 0 imply a weak to no relationship.”

The two values we want to correlate are our axes, whereas the single dots represent our set of items. The PCC calculates the trend within this chart represented as an arrow above.

The mathematical formula that defines the Pearson Correlation Coefficient is the following:
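In its computational form – the one the measures below implement – with x and y being the two measures evaluated per item and n the number of items, it reads:

\[
r = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\;\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}}
\]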

The PCC can be used to calculate the correlation between two measures which can be associated with the same customer. A measure can be anything here: the age of a customer, its sales, the number of visits, etc., but also things like sales with red products vs. sales with blue products. As you can imagine, this can be a very powerful statistical KPI for any analytical data model. To demonstrate the calculation we will try to correlate the order quantity of a customer with its sales amount. The order quantity will be our [MeasureX] and the sales will be our [MeasureY], and the set that we will calculate the PCC over are our customers. To make the whole calculation easier to follow I split it up into separate measures:

MeasureX := SUM('Internet Sales'[Order Quantity])
MeasureY := SUM('Internet Sales'[Sales Amount])

Based on these measures we can define further measures which are necessary for the calculation of our PCC. The calculations are tied to a set if items, in our case the single customers:

Sum_XY := SUMX(VALUES(Customer[Customer Id]), [MeasureX] * [MeasureY])
Sum_X2 := SUMX(VALUES(Customer[Customer Id]), [MeasureX] * [MeasureX])
Sum_Y2 := SUMX(VALUES(Customer[Customer Id]), [MeasureY] * [MeasureY])
Count_Items := DISTINCTCOUNT(Customer[Customer Id])

Now that we have calculated the various summations over our base measures, it is time to create the numerator and denominator for our final calculation:

Pearson_Numerator :=
    ([Count_Items] * [Sum_XY]) - ([MeasureX] * [MeasureY])
Pearson_Denominator_X :=
    ([Count_Items] * [Sum_X2]) - ([MeasureX] * [MeasureX])
Pearson_Denominator_Y :=
    ([Count_Items] * [Sum_Y2]) - ([MeasureY] * [MeasureY])
Pearson_Denominator :=
    SQRT([Pearson_Denominator_X] * [Pearson_Denominator_Y])

Having these helper-measures in place the final calculation for our PCC is straight forward:

Pearson := DIVIDE([Pearson_Numerator], [Pearson_Denominator])
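For reference, the DAX 2.0 variables approach mentioned in the update above can pack everything into a single measure. The following is only a rough sketch of that idea (my own adaptation with an illustrative measure name, not necessarily identical to Daniil's formula), assuming the same [MeasureX] and [MeasureY] base measures:

Pearson (single measure) :=
VAR Items = VALUES ( Customer[Customer Id] )    -- the set of items we correlate over
VAR nItems = COUNTROWS ( Items )
VAR SumX = SUMX ( Items, [MeasureX] )
VAR SumY = SUMX ( Items, [MeasureY] )
VAR SumXY = SUMX ( Items, [MeasureX] * [MeasureY] )
VAR SumX2 = SUMX ( Items, [MeasureX] * [MeasureX] )
VAR SumY2 = SUMX ( Items, [MeasureY] * [MeasureY] )
RETURN
    DIVIDE (
        nItems * SumXY - SumX * SumY,
        SQRT ( ( nItems * SumX2 - SumX * SumX ) * ( nItems * SumY2 - SumY * SumY ) )
    )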

This [Pearson]-measure can then be used together with any attribute in our model – e.g. the Calendar Year in order to track the changes of the Pearson Correlation Coefficient over years:

Pearson by Year

For those of you who are familiar with the Adventure Works sample DB, these numbers should not be surprising. In 2005 and 2006 the Adventure Works company only sold bikes, and usually a customer only buys one bike – so we have a pretty strong correlation here. However, in 2007 they also started selling Clothing and Accessories which are in general cheaper than Bikes but are sold more often.

Pearson by Year and Category

This has an impact on our Pearson value, which is very obvious in the screenshots above.

As you probably also realized, the Grand Total of our Pearson calculation cannot be directly related to the single years and may also be the complete opposite of the single values. This effect is called Simpson’s Paradox and is the expected behavior here.

[MeasureX] and [MeasureY] can be exchanged for any other DAX measures, which makes this calculation really powerful. Also, the set of items over which we want to calculate the correlation can be exchanged quite easily. Below you can download the sample Excel workbook but also a DAX query which could be used in Reporting Services or any other tool that allows execution of DAX queries.

Sample Workbook (Excel 2013): Pearson.xlsx
DAX Query: Pearson_SSRS.dax

Upcoming Events I am speaking at in June 2015

I am very glad that I got selected as a speaker for two upcoming SQL Saturdays in June. First there is the SQL Saturday #409 in Rheinland, Germany  on June 13 and the week after the SQL Saturday #419 in Bratislava, Slovakia on June 20.

For those of you who are new to the concept of PASS SQL Saturdays, this is a series of free-of-charge events all around the globe where experienced speakers talk about all topics around the Microsoft SQL Server platform and beyond. As I said, it's free, you just need to register in time in order to get a ticket, so better be fast before all slots are taken!

SQLSaturday_409_Rheinland_Germany

I will do a session together with my colleague Markus Begerow (b, t) on “Power BI on SAP HANA” – two technologies I got to work a lot with recently. We are going to share our experience on how to use Power BI to extract data from SAP HANA, the different interfaces you can use and the advantages and drawbacks of each. Even though it is considered a general session, we will also do a lot of hands-on and elaborate on some of the technical details you need to be aware of, both for the Power BI side and also for SAP HANA.

 

SQLSaturday_419_Bratislava_Slovakia

In Bratislava I will speak about “Lessons Learned: SSAS Tabular in the real world”, where I will present the technical and non-technical findings I have made in the past when implementing SSAS Tabular models at larger scales for various customers. I will cover the whole process from choosing SSAS Tabular as your engine (or not choosing it), things to consider during implementation, and also shed some light on the administrative challenges once the solution is in production.

I think both are really interesting sessions and I would be happy to see a lot of you there and have some interesting discussions!

Recursive Calculations in PowerPivot using DAX

If you have ever tried to implement a recursive calculation in DAX similar to how you would have done it back in the good old days of MDX (see here), you would probably have come up with a DAX formula similar to the one below:

Sales ForeCast :=
IF (
    NOT ( ISBLANK ( [Sales] ) ),
    [Sales],
    CALCULATE (
        [Sales ForeCast],
        DATEADD ( 'Date'[Calendar], 1, MONTH )
    ) * 1.05
)

However, in DAX you would end up with the following error:

A circular dependency was detected: 'Sales'[Sales ForeCast], 'Sales'[Sales ForeCast].

This makes sense as you cannot reference a variable within its own definition – e.g. X = X + 1 cannot be defined from a mathematical point of view (at least according to my limited math skills). MDX is somehow special here where the SSAS engine takes care of this recursion by taking the IF() into account.

So where could you possibly need a recursive calculation like this? In my example I will do some very basic forecasting based on monthly growth rates. I have a table with my actual sales and another table for my expected monthly growth as percentages. If I do not have any actual sales I want to use my expected monthly growth to calculate my forecast starting with my last actual sales:

GeneralLogic

This is a very common requirement for finance applications; it is very easy to achieve in pure Excel but very tough to do in DAX, as you probably realized on your own – which is what finally led you here.

In Excel we would simply add a calculation like this and propagate it down to all rows:
ExcelFormula
(assuming column C contains your Sales, D your Planned Growth Rate and M is the column where the formula itself resides)

In order to solve this in DAX we have to completely rewrite our calculation! The general approach that we are going to use was already explained by Mosha Pasumansky some years back, but for MDX. So I adapted the logic and changed it accordingly to also work with DAX. I broke the solution down into several steps:
1) find the last actual sales – April 2015 with a value of 35
2) find out with which value we have to multiply our previous month's value to get the current month's Forecast
3) calculate the natural logarithm (DAX LN()-function) of the value in step 2)
4) sum all values from the beginning of time until the current month
5) raise [e] to the power of our sum-value from step 4) using the DAX EXP()-function
6) do some cosmetics: display our new value if no actual sales exist and take care of aggregation to higher levels

Note: The new Office 2016 Preview introduces a couple of new DAX functions, including PRODUCTX() which can be used to combine the Steps 3) to 5) into one simple formula without using any complex LN() and EXP() combinations.

Step 1:
We can use this formula to get our last sales:

Last Sales :=
IF (
    ISBLANK (
        CALCULATE (
            [Sales],
            DATEADD ( 'Date'[DateValue], 1, MONTH )
        )
    ),
    [Sales],
    1
)

It basically checks if there are no [Sales] next month. If yes, we use the current [Sales]-value as our [Last Sales], otherwise we use a fixed value of 1 as a multiplication with 1 has no impact on the final result.

Step 2:
Get our multiplier for each month:

MultiplyBy :=
IF (
    ISBLANK ( [Last Sales] ),
    1 + [Planned GrowthRate],
    [Last Sales]
)

If we do not have any [Last Sales], we use our [Planned GrowthRate] for our later multiplication/summation, otherwise we take our [Last Sales]-value.

Step 3 and 4:
As we cannot use “Multiply” as our aggregation we first need to calculate the LN and sum it up from the first month to the current month:

Cumulated LN :=
CALCULATE (
    SUMX ( VALUES ( 'Date'[Month] ), LN ( [MultiplyBy] ) ),
    DATESBETWEEN (
        'Date'[DateValue],
        BLANK (),
        MAX ( 'Date'[DateValue] )
    )
)

 

Step 5 and 6:
If there are no actual sales, we display our calculated Forecast:

Sales ForeCast :=
SUMX (
    VALUES ( 'Date'[Month] ),
    IF ( ISBLANK ( [Sales] ), EXP ( [Cumulated LN] ), [Sales] )
)

Note that we need to use SUMX over our Months here in order to also get correct subtotals on higher levels, e.g. Years. That's all the SUMX is necessary for; the IF itself should be self-explanatory here.
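As mentioned in the note after the step list above, once PRODUCTX() is available (Office 2016 Preview and later DAX versions), steps 3) to 5) can be collapsed into a direct product instead of the LN()/EXP() detour. The following is just a rough sketch of that idea (my own adaptation with illustrative measure names, not part of the original solution), reusing the [Sales] and [MultiplyBy] measures from above:

Cumulated Product :=
CALCULATE (
    -- multiply [MultiplyBy] over all months from the beginning of time up to the current month
    PRODUCTX ( VALUES ( 'Date'[Month] ), [MultiplyBy] ),
    DATESBETWEEN (
        'Date'[DateValue],
        BLANK (),
        MAX ( 'Date'[DateValue] )
    )
)

Sales ForeCast PRODUCTX :=
SUMX (
    VALUES ( 'Date'[Month] ),
    IF ( ISBLANK ( [Sales] ), [Cumulated Product], [Sales] )
)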

 

So here is the final result – check out the last column:
FinalPivot

The calculation is flexible enough to handle missing sales. So if, for example, we only had sales for January, our recursion would start there and already use the [Planned GrowthRate] to calculate the February Forecast value:
FinalPivot2

Quite handy, isn’t it?

The sample-workbook (Excel 365) can be downloaded here: RecursiveCalculations.xlsx

Excel CUBE-Functions and MDX UniqueNames

Two weeks ago at the German SQL Server Conference 2015 I was at Peter Myers' session about Mastering the CUBE Functions in Excel. (PS: Peter is also speaking at our upcoming SQLSaturday #374 in Vienna next week and at PASS SQLRally in Copenhagen the week after). After his session we had a further discussion about this topic and our experiences on how to use Excel's CUBE-functions in order to build nice dashboards with native Excel functionality that also works with e.g. Excel Services. It's always great to exchange ideas with people that share the same passion for the same topic! One thing we both agreed is currently missing is a way to get the MDX UniqueName of something that is selected in a slicer, filter or simply in a cell using the CUBEMEMBER-function. I once used a special cube measure, created in the MDX script, which returned the UniqueName of a given member that was selected together with this special measure. For this to work with Excel you need to know how Excel builds the MDX when querying cube values using the CUBEVALUE-function. Here is a little example:
Excel_CubeValue_Formula
This produces the following MDX query:

SELECT
{
    (
        [Measures].[Internet Sales Amount],
        [Product].[Category].&[1]
    )
} ON 0
FROM [Adventure Works]
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS

So it basically creates a tuple that contains everything you pass into the CUBEVALUE-function as a parameter. Knowing this we can create a calculated measure to get the MDX UniqueName of this tuple using the MDX TupleToStr()- and AXIS()-functions:

MEMBER [Measures].[Excel TupleToStr] AS (
TupleToStr(axis(0).item(0))
)

Replacing the [Measures].[Internet Sales Amount] of our initial CUBEVALUE-function with this new measure would return this to Excel:

([Measures].[Internet Sales Amount],[Product].[Category].&[1])

 

Ok, so far so good but nothing really useful as you need to hardcode the member’s UniqueName into the CUBEVALUE-function anyway so you already know the UniqueName.
However, this is not the case if you are dealing with Pivot Table page filters and/or slicers! You can simply refer to them within the CUBEVALUE-function but you never get the UniqueName of the selected item(s). Well, at least not directly! But you can use the approach described above, using a special MDX calculated measure, to achieve this, as I will demonstrate below.

Calculated measures can only be created using the Pivot Table interface but can also be used in CUBE-functions. So first thing you need to do is to create a Pivot Table and add a new MDX Calculated Measure:
Excel_Create_MDX_calculated_measure

!Caution! some weird MDX coming !Caution!


Excel_Create_MDX_calculated_measure2

You may wonder why such complex MDX is necessary and what it actually does. What it does is the following: based on the example MDX query that Excel generates (as shown above), this is a universal MDX expression that returns the MDX UniqueName of any other member that is selected together with our measure using the CUBEVALUE-function. It also removes the UniqueName of the measure itself so the result can be used again with any other measure, e.g. [Internet Sales Amount].
The reason why it is rather complex is that Excel may group similar queries and execute them as a batch/as one query to avoid too many executions, which would slow down the overall performance. So we cannot just reference the first element of our query as it may belong to any other CUBEVALUE-function. This MDX deals with all these kinds of issues.

The MDX above allows you to specify only two additional filters but it may be extended to any number of filters that you pass in to the CUBEMEMBER-function. This would be the general pattern:

MID(
  IIf(axis(0).item(0).count > 0 AND
        NOT(axis(0).item(0).item(0).hierarchy IS [Measures]),
    "," + axis(0).item(0).item(0).hierarchy.currentmember.uniquename,
    "")
+ IIf(axis(0).item(0).count > 1 AND
        NOT(axis(0).item(0).item(1).hierarchy IS [Measures]),
    "," + axis(0).item(0).item(1).hierarchy.currentmember.uniquename,
    "")
+ IIf(axis(0).item(0).count > n AND
        NOT(axis(0).item(0).item(n).hierarchy IS [Measures]),
    "," + axis(0).item(0).item(n).hierarchy.currentmember.uniquename,
    "")
, 2)

After creating this measure we can now use it in our CUBE-functions in combination with our filters and slicers:
Excel_MDX_CUBEVALUE_UniqueNames_Filter
Excel_MDX_CUBEVALUE_UniqueNames_Slicer

You may have noted that I had to use CUBERANKEDMEMBER here. This is because filters and slicers always return a set, and if we passed a set into our CUBEVALUE-function a different MDX query would be generated which would not allow us to extract the single UniqueNames of the selected items using the approach above (or any other MDX I could think of). So, this approach currently only works with single selections! I hope that the Excel team will implement a native function to extract the UniqueName(s) of the selected items in the future to make this workaround obsolete!

Once we have our UniqueName(s) we can use them in e.g. a CUBESET-function to return the Top 10 days for a given group of products (filter) and the selected year (slicer):
Excel_MDX_CUBESET_TopCount

And that’s it!

So why is this so cool?

  • It works with SSAS (multidimensional and tabular) and Power Pivot as Excel still uses MDX to query all those sources. It may also work with SAP HANA’s ODBO connector but I have not tested this yet!
  • It does not require any VBA which would not work in Excel Services – this solution does!
  • The calculation is stored within the Excel Workbook so it can be easily shared with other users!
  • There is no native Excel functionality which would allow you to create a simple Top 10 report which works with filters and slicers as shown above or any more complex dynamic report/dashboard with any dynamic filtering.

So no more to say here – Have fun creating your interactive Excel web dashboards!

Download sample Workbook: Samples.xlsx

Note: You may also rewrite any TOPCOUNT expression and use the 4th and 5th parameters of the CUBESET-function instead. This is more native and does not require as much MDX knowledge:
Excel_MDX_CUBESET_TopCount_Native
However, if you are not familiar with MDX, I highly recommend learning it before you write any advanced calculations as shown above, as otherwise the results might be a bit confusing in the beginning! Especially if you filter and use TOPCOUNT on the same dimension!

Dynamic ABC Analysis in Power Pivot using DAX – Part 2

Almost two years ago I published the first version of a Dynamic ABC Analysis in Power Pivot and by now it is the post with the most comments among all my blog posts. This has two reasons:
1) the formula was quite complex and not easy to understand or implement
2) the performance was not really great with bigger datasets

When the first of those comments flew in, I started investigating a new, advanced formula. At about the same time Marco Russo and Alberto Ferrari published their ABC Classification pattern – a static version using calculated columns – at www.daxpatterns.com. When I had my first dynamic version ready I sent it to Marco and Alberto and asked whether they were interested in the pattern and if I could publish it on their website. Long story short – this week the new pattern was released and can now be found here:

ABC Classification – Dynamic

It got some major performance improvements and was also designed towards reusability with other models. The article also contains some detailed explanations of how the formula actually works, but it is still very hard DAX which will take some time to be fully understood. The pattern also features some extended versions to address more specific requirements, but I think it's best to just read the article on your own.

Hope you enjoy it!

Happy New Year! – How about some Conferences?

2015 just started – and it is a quick start in terms of SQL Server conferences! There are quite a lot of upcoming conferences and I am happy that most of them are in Europe.

Below you can find a short overview followed by some more details:

What? | When? | Where? | Am I there?
German SQL Server Conference | February 3-5 | Darmstadt, Germany | Yes, I am speaking!
SQL Saturday Vienna | February 27-28 | Vienna, Austria | Maybe, but not speaking
SQLRally Nordic | March 2-4 | Copenhagen, Denmark | Yes, I am speaking!
SQLBits XIV Superheroes | March 4-7 | London, UK | Yes, I am speaking!

UPDATE 2015-01-13:
I just received a confirmation that my session “Power BI on SAP HANA” was accepted for SQL Bits XIV!
It’s also the first time that I will do a session together with a Co-Speaker, my colleague Markus Begerow (b)

 

It starts with the German SQL Server Conference 2015 on February 3-5th
728x90_SQL_Server_Konferenz_EN
I am also very happy that my session “Load testing Analysis Services” was selected and I will be speaking for the second time in a row at this awesome conference, which is also the biggest German SQL Server conference out there. And no worries, there are also a lot of English sessions in case you do not speak German.

Up next is a true marathon of conferences starting with the SQL Saturday #374 in Vienna on 28th of February.
SQLSaturday_374_Vienna
This year also featuring Pre-Cons on 27th of February!
Reza Rad (t, b) and Leila Etaati (t, b) will be speaking on “Power BI from Rookie to Rockstar” and Dejan Sarka (t, b) on “Advanced Data Modeling”.
Last year it was fully booked pretty soon and we had a long waiting list, so better make your reservation now!
The schedule can be found here and features 20 sessions by well-known SQL Server professionals.

Directly after the SQL Saturday in Vienna the PASS SQLRally Nordic opens its doors in Copenhagen again on March 2-4.
SQLRally_2015_Copenhagen_Banner
The official schedule was just released today and can be found here (full PDF)!
I will deliver my session “Deep-Dive to Analysis Services Security” on Wednesday 4th.

Last but definitely not least is SQLBits, Europe's biggest SQL Server conference, which is taking place for the 13th time now on March 4-7 in London (don't get confused just because it's SQL Bits XIV, Microsoft also skipped Windows 9). This year it's all about Superheroes and a lot of SQL Server superheroes will be there!
SQLBits_Superheroes
A schedule is not available yet but will be made public within the next few days I guess, so stay tuned!

UPDATE 2015-01-13:
The official schedule will soon be available here. Our Session is very likely to be on Friday.

Hope to see you there!

Events-In-Progress for Time Periods in DAX

Calculating the Events-In-Progress is a very common requirement and many of my fellow bloggers like Chris Webb, Alberto Ferrari and Jason Thomas already blogged about it and came up with some really nice solutions. Alberto also wrote a white-paper summing up all their findings which is a must-read for every DAX and Tabular/PowerPivot developer.
However, I recently had a slightly different requirement where I needed to calculate the Events-In-Progress for Time Periods – e.g. the Open Orders in a given month – and not only for a single day. The calculations shown in the white-paper only work for a single day so I had to come up with my own calculation to deal with this particular problem.

Before we can start we need to identify which orders we actually want to count if a Time Period is selected. Basically we have to differentiate between 6 types of orders for our calculation and decide which of them we want to include or not:
Overview_EventsInProgress_TimeFrame

Order Definition
Order1 (O1) Starts before the Time Period and ends after it
Order2 (O2) Starts before the Time Period and ends in it
Order3 (O3) Starts in the Time Period and ends after it
Order4 (O4) Starts and ends in the Time Period
Order5 (O5) Starts and ends after the Time Period
Order6 (O6) Starts and ends before the Time Period

For my customer an order was considered as “open” if it was open within the selected Time Period, so in our case we need to count only Orders O1, O2, O3 and O4. The first calculation you would usually come up with may look like this:

[MyOpenOrders_FILTER] :=
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ), -- our calculation, could also be a reference to a measure
    FILTER (
        'Internet Sales',
        'Internet Sales'[Order Date]
            <= CALCULATE ( MAX ( 'Date'[Date] ) )
    ),
    FILTER (
        'Internet Sales',
        'Internet Sales'[Ship Date]
            >= CALCULATE ( MIN ( 'Date'[Date] ) )
    )
)

We apply custom filters here to get all orders that were ordered on or before the last day and were also shipped on or after the first day of the selected Time Period. This is pretty straight forward and works just fine from a business point of view. However, performance could be much better as you probably already guessed if you read Alberto’s white-paper.

So I integrated his logic into my calculation and came up with this formula (note that I could not use the final Yoda-Solution as I am using a DISTINCTCOUNT here):

[MyOpenOrders_TimePeriod] :=
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ),
    GENERATE (
        VALUES ( 'Date'[Date] ),
        FILTER (
            'Internet Sales',
            CONTAINS (
                DATESBETWEEN (
                    'Date'[Date],
                    'Internet Sales'[Order Date],
                    'Internet Sales'[Ship Date]
                ),
                [Date], 'Date'[Date]
            )
        )
    )
)

To better understand the calculation you may want to rephrase the original requirement to this: “An open order is an order that was open on at least one day in the selected Time Period”.

I am not going to explain the calculations in detail again as the approach was already very well explained by Alberto and the concepts are the very same.

An alternative calculation would also be this one which of course produces the same results but performs “different”:

[MyOpenOrders_TimePeriod2] :=
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ),
    FILTER (
        GENERATE (
            SUMMARIZE (
                'Internet Sales',
                'Internet Sales'[Order Date],
                'Internet Sales'[Ship Date]
            ),
            DATESBETWEEN (
                'Date'[Date],
                'Internet Sales'[Order Date],
                'Internet Sales'[Ship Date]
            )
        ),
        CONTAINS ( VALUES ( 'Date'[Date] ), [Date], 'Date'[Date] )
    )
)

I said it performs “different” because, as for all DAX calculations, performance also depends on your model, the data, and the distribution and granularity of the data. So you should test which calculation performs best in your scenario. I did a simple comparison in terms of query performance for AdventureWorks and also my customer's model, and the results are slightly different:

Calculation (results in ms) | AdventureWorks | Customer's Model
[MyOpenOrders_FILTER] | 58.0 | 1,094.0
[MyOpenOrders_TimePeriod] | 40.0 | 390.8
[MyOpenOrders_TimePeriod2] | 35.5 | 448.3

As you can see, the original FILTER-calculation performs worst on both models. The last calculation performs better on the small AdventureWorks model, whereas on my customer's model (16 million rows) the calculation in the middle performs best. So it's up to you (and your model) which calculation you should prefer.

The neat thing is that all three calculations can be used with any existing hierarchy or column in your Date-table and of course also on the Date-Level as the original calculation.

Download: Events-in-Progress.pbix

SSAS Dynamic Named Sets in Calculated Members

Recently at one of my customers we were doing some performance tuning on an SSAS Multidimensional cube and I came across something I had not seen before in my lifetime as an SSAS developer. Even the simplest SELECT queries were running for several seconds, even if executed on a warm cache! So at first I thought this might be related to some SCOPE assignments overwriting values etc. Using my MdxScriptDebugger I could easily identify that none of the SCOPE assignments had any impact on the actual result of the query. However, what the MdxScriptDebugger trace also revealed was that the query time increased after a calculated member was created. Important to note here is that this specific member was NOT used in the test query at all! Investigating the calculated member turned out that it was using a Dynamic Named Set.

Dynamic Named Sets are usually used if you want a set to be re-evaluated in the current context, as opposed to Static Named Sets which are only evaluated once during creation. For example, you can create a Dynamic Named Set for your Top 10 customers. Changing the Year or Country would cause a re-evaluation of the set and different customers are returned for your filter selections. This type of calculated set is usually not a problem.
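For illustration, on Adventure Works such a Top 10 set could look roughly like this (my own sketch with an illustrative set name, reusing the hierarchies and measures referenced later in this post):

CREATE DYNAMIC SET [Top10Customers] AS {
// re-evaluated per query, so it respects the current filter selections
TopCount(
    [Customer].[Customer].[Customer].MEMBERS,
    10,
    [Measures].[Internet Sales Amount]
)
};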

Another reason to use Dynamic Named Sets is to deal with Sub-Selects. Some client tools, foremost Microsoft Excel Pivot Tables, use Sub-Selects to deal with multiple selections on the same attribute. Let's do a little example on how to use Dynamic Named Sets here. Assume you need to calculate the average yearly sales for the selected years. This may sound very trivial at first sight but can be very tricky. In order to calculate the yearly average we first need to calculate how many years are currently selected:

CREATE MEMBER CURRENTCUBE.[Measures].[CountYears_EXISTING] AS (
COUNT(EXISTING [Date].[Calendar Year].[Calendar Year].members)
);

However, this does not work if Sub-Selects are used in the query:
PivotTable_SSAS_DynamicNamedSet_Wrong
The calculated member returns “6” (the overall number of years) instead of “2” (the actually selected number of years). The issue here is that the calculation is not aware of any Sub-Select or filters within the Sub-Select as it is executed only outside of the Sub-Select.

To work around this issue you can create a Dynamic Named Set and refer to it in your calculated member:

CREATE DYNAMIC SET [ExistingYears] AS {
EXISTING [Date].[Calendar Year].[Calendar Year].members
};

CREATE MEMBER CURRENTCUBE.[Measures].[CountYears_DynamicSet] AS (
COUNT([ExistingYears])
);

PivotTable_SSAS_DynamicNamedSet_Working

Now we get the correct results for our Count of Years calculation and could simply divide our Sales Amount by this value to get average yearly sales. The reason for this is that Dynamic Named Sets are also evaluated within the Sub-Select and therefore a COUNT() on it returns the correct results here.
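In MDX script terms, that final division could look roughly like this (again just a sketch with an illustrative member name, assuming the measures defined above):

CREATE MEMBER CURRENTCUBE.[Measures].[Avg Yearly Sales] AS (
// divide total sales by the number of years actually selected (Sub-Select aware)
[Measures].[Internet Sales Amount] / [Measures].[CountYears_DynamicSet]
);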

So this technique is quite powerful and is also the only feasible workaround to deal with this kind of problem. But as I initially said, this can also cause some performance issues!

To illustrate this issue on Adventure Works simply add these two calculations to your MDX Script:

CREATE DYNAMIC SET [ExpensiveSet] AS {
Exists(
    [Product].[Category].[Category].members,
    Filter(
        Generate(
            Head([Date].[Calendar].[Month].MEMBERS, 30),
            CrossJoin(
                {[Date].[Calendar].CURRENTMEMBER},
                Head(
                    Order(
                        [Customer].[Customer].[Customer].MEMBERS,
                        [Measures].[Internet Sales Amount],
                        BDESC),
                    10000))),
        [Measures].[Internet Order Quantity] > -1
    ),
    'Internet Sales'
)
};

CREATE MEMBER CURRENTCUBE.[Measures].[UnusedCalc] AS (
COUNT([ExpensiveSet])
);

The [ExpensiveSet] is just a Dynamic Named Set which needs some time to be evaluated and the [UnusedCalc] measure does a simple count over the [ExpensiveSet]. Having these calculations in place you can now run any query against your cube and will notice that even the simplest query now takes some seconds to execute even if the new calculated member is not used:

SELECT
[Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]

I am quite sure that this behavior is related to how Dynamic Named Sets are evaluated and why they also work for Sub-Selects. However, I also think that calculations that are not used in a query should not impact the results and/or performance of other queries!

I just created a bug on Connect in case you want to vote for it:
https://connect.microsoft.com/SQLServer/feedback/details/1049569

I know that Dynamic Named Sets in combination with calculated members are not a widely used technique, as I guess most developers are not even aware of their power. For those who are, please keep in mind that these kinds of calculations get evaluated for every query, which can be crucial if your Dynamic Named Set is expensive to calculate! This also has an impact on metadata queries!