MDX Script Debugger – Beta-Release

I recently built a tool that helps debug the MDX script of an Analysis Services cube in order to track down formula-engine-related performance issues. As you probably know, most of the performance issues out there are caused by poorly written MDX scripts. This tool allows you to execute a reference query and highlights the MDX script commands that are effectively used when the query is run. It provides the overall timings and shows how much each additional MDX script command extended the execution time of the reference query. The results can then either be exported to XML for further analysis in e.g. Power BI, or a customized version of the MDX script can be created and used to run another set of tests.

The tool is currently in a beta state and this is the first official release – and also the first tool I have written and shared publicly, so please don't be too harsh with your feedback :D – just joking, all feedback is good feedback!

Below is a little screenshot which shows the results after the reference query is executed. The green lines are the ones effectively used by the query, whereas the others do not have any impact on the values returned by the query.

ResultsView

A list of all features, further information and also the source code can be found at the project page on CodePlex:
https://mdxscriptdebugger.codeplex.com/
Also the download is available from there:
https://mdxscriptdebugger.codeplex.com/releases

Looking forward to your feedback and hope it helps you to track down performance issues of your MDX Scripts!

Configure HTTP access to Analysis Services using PowerShell

Recently I had to set up an Analysis Services cube and expose it to external users. This is usually done by using Internet Information Services (IIS) and creating a new WebSite which hosts msmdpump.dll. This DLL more or less wraps XMLA commands in HTTP, thus allowing external users to access the cube via HTTP. Besides Windows Authentication this setup also allows Basic Authentication, so external users can simply connect by specifying username and password in e.g. Excel when connecting to the cube:
ExcelConnectionDialog

There are already a lot of whitepapers out there which describe how to set things up correctly. Here are just some examples:
– MSDN: http://msdn.microsoft.com/en-us/library/gg492140.aspx
– MSBI Academy (great video!): http://msbiacademy.com/?p=5711 by Rob Kerr

They provide very useful information and you should be familiar with the general setup before proceeding here or using the final PowerShell script.

The PowerShell script basically performs the following steps:

  1. Create a local folder as base for your WebSite in IIS
  2. Copy SSAS ISAPI files (incl. msmdpump.dll) to the folder
  3. Create and Configure an IIS AppPool
  4. Create and Configure an IIS WebSite
  5. Add and enable an ISAPI entry for msmdpump.dll
  6. Configure Authentication
  7. Configure Default Document
  8. Update connection information to SSAS server

I tested it successfully with a clean installation of IIS 8.0 (using applicationhost.config.clean.install). In case you already have other WebSites running you may still consider doing the steps manually or adapting the script if necessary. The script is written so as not to overwrite any existing folders, WebSites, etc. – but you never know.
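If you do run it against an existing IIS installation, it also cannot hurt to take a snapshot of the IIS configuration first – a one-liner, assuming the WebAdministration module is available (which the script requires anyway):

    # Creates a named snapshot of the IIS configuration that can later be brought back with Restore-WebConfiguration
    Import-Module WebAdministration
    Backup-WebConfiguration -Name "Before_SSAS_HTTP_Setup"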

So here is my final script:

    #Import Modules
    Import-Module WebAdministration

    # change these settings
    $iisSiteName = "OLAP"
    $iisPort = "8000"
    $olapServerName = "server\instance"

    # optionally also change these settings
    $isapiFiles = "c:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\bin\isapi\*"
    $iisAbsolutePath = "C:\inetpub\wwwroot\" + $iisSiteName
    $iisAppPoolName = $iisSiteName + "_AppPool"
    $iisAppPoolUser = "" #default is ApplicationPoolIdentity
    $iisAppPoolPassword = ""
    $iisAuthAnonymousEnabled = $false
    $iisAuthWindowsEnabled = $true
    $iisAuthBasicEnabled = $true
    $olapSessionTimeout = "3600" #default
    $olapConnectionPoolSize = "100" #default

    if(!(Test-Path $iisAbsolutePath -pathType container))
    {
        #Creating Directory
        mkdir $iisAbsolutePath  | Out-Null

        #Copying Files
        Write-Host -NoNewline "Copying ISAPI files to IIS Folder … "
        Copy -Path $isapiFiles -Destination $iisAbsolutePath -Recurse
        Write-Host " Done!" -ForegroundColor Green
    }
    else
    {
        Write-Host "Path $iisAbsolutePath already exists! Please delete manually if you want to proceed!" -ForegroundColor Red
        Exit
    }

    #Check if AppPool already exists
    if(!(Test-Path $("IIS:\\AppPools\" + $iisAppPoolName) -pathType container))
    {
        #Creating AppPool
        Write-Host -NoNewline "Creating ApplicationPool $iisAppPoolName if it does not exist yet … "
        $appPool = New-WebAppPool -Name $iisAppPoolName
        $appPool.managedRuntimeVersion = "v2.0"
        $appPool.managedPipelineMode = "Classic"

        $appPool.processModel.identityType = 4 #0=LocalSystem, 1=LocalService, 2=NetworkService, 3=SpecificUser, 4=ApplicationPoolIdentity
        #For details see http://www.iis.net/configreference/system.applicationhost/applicationpools/add/processmodel

        if ($iisAppPoolUser -ne "" -AND $iisAppPoolPassword -ne "") {
            Write-Host
            Write-Host "Setting AppPool Identity to $iisAppPoolUser"
            $appPool.processmodel.identityType = 3
            $appPool.processmodel.username = $iisAppPoolUser
            $appPool.processmodel.password = $iisAppPoolPassword
        }
        $appPool | Set-Item
        Write-Host " Done!" -ForegroundColor Green
    }
    else
    {
        Write-Host "AppPool $iisAppPoolName already exists! Please delete manually if you want to proceed!" -ForegroundColor Red
        Exit
    }

    #Check if WebSite already exists
    $iisSite = Get-Website $iisSiteName
    if ($iisSite -eq $null)
    {
        #Creating WebSite
        Write-Host -NoNewline "Creating WebSite $iisSiteName if it does not exist yet … "
        $iisSite = New-WebSite -Name $iisSiteName -PhysicalPath $iisAbsolutePath -ApplicationPool $iisAppPoolName -Port $iisPort
        Write-Host " Done!" -ForegroundColor Green
    }
    else
    {
        Write-Host "WebSite $iisSiteName already exists! Please delete manually if you want to proceed!" -ForegroundColor Red
        Exit
    }

    #Ensuring ISAPI CGI Restriction entry exists for msmdpump.dll
    if ((Get-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$iisAbsolutePath\msmdpump.dll']") -eq $null)
    {
        Write-Host -NoNewline "Adding ISAPI CGI Restriction for $iisAbsolutePath\msmdpump.dll … "
        Add-WebConfiguration "/system.webServer/security/isapiCgiRestriction" -PSPath:IIS:\\  -Value @{path="$iisAbsolutePath\msmdpump.dll"}
        Write-Host " Done!" -ForegroundColor Green
    }
    #Enabling ISAPI CGI Restriction for msmdpump.dll
    Write-Host -NoNewline "Updating existing ISAPI CGI Restriction … "
    Set-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$iisAbsolutePath\msmdpump.dll']/@allowed" -PSPath:IIS:\\ -Value "True"
    Set-WebConfiguration "/system.webServer/security/isapiCgiRestriction/add[@path='$iisAbsolutePath\msmdpump.dll']/@description" -PSPath:IIS:\\ -Value "msmdpump.dll for SSAS"
    Write-Host " Done!" -ForegroundColor Green


    #Adding ISAPI Handler to WebSite
    Write-Host -NoNewline "Adding ISAPI Handler … "
    Add-WebConfiguration /system.webServer/handlers -PSPath $iisSite.PSPath -Value @{name="msmdpump"; path="*.dll"; verb="*"; modules="IsapiModule"; scriptProcessor="$iisAbsolutePath\msmdpump.dll"; resourceType="File"; preCondition="bitness64"}
    Write-Host " Done!" -ForegroundColor Green

    #enable Windows and Basic Authentication
    Write-Host -NoNewline "Setting Authentication Providers … "
    #need to Unlock sections first
    Set-WebConfiguration /system.webServer/security/authentication/anonymousAuthentication  MACHINE/WEBROOT/APPHOST -Metadata overrideMode -Value Allow
    Set-WebConfiguration /system.webServer/security/authentication/windowsAuthentication  MACHINE/WEBROOT/APPHOST -Metadata overrideMode -Value Allow
    Set-WebConfiguration /system.webServer/security/authentication/basicAuthentication  MACHINE/WEBROOT/APPHOST -Metadata overrideMode -Value Allow

    Set-WebConfiguration /system.webServer/security/authentication/anonymousAuthentication -PSPath $iisSite.PSPath -Value @{enabled=$iisAuthAnonymousEnabled}
    Set-WebConfiguration /system.webServer/security/authentication/windowsAuthentication -PSPath $iisSite.PSPath -Value @{enabled=$iisAuthWindowsEnabled}
    Set-WebConfiguration /system.webServer/security/authentication/basicAuthentication -PSPath $iisSite.PSPath -Value @{enabled=$iisAuthBasicEnabled}
    Write-Host " Done!" -ForegroundColor Green

    #Adding Default Document
    Write-Host -NoNewline "Adding Default Document msmdpump.dll … "
    Add-WebConfiguration /system.webServer/defaultDocument/files -PSPath $iisSite.PSPath -atIndex 0 -Value @{value="msmdpump.dll"}
    Write-Host " Done!" -ForegroundColor Green

    #Updating OLAP Server Settings
    Write-Host -NoNewline "Updating OLAP Server Settings … "
    [xml]$msmdpump = Get-Content "$iisAbsolutePath\msmdpump.ini"
    $msmdpump.ConfigurationSettings.ServerName = $olapServerName
    $msmdpump.ConfigurationSettings.SessionTimeout = $olapSessionTimeout
    $msmdpump.ConfigurationSettings.ConnectionPoolSize = $olapConnectionPoolSize
    $msmdpump.Save("$iisAbsolutePath\msmdpump.ini")
    Write-Host " Done!" -ForegroundColor Green

    Write-Host "Everything done! "
    Write-Host "The SSAS server can now be accessed via http://$env:computername`:$iisPort"

 

The script can also be downloaded here.

The process of setting up HTTP connectivity is the same for Analysis Services Multidimensional and Tabular, so the script works for both scenarios – just change the server name accordingly.
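To quickly verify that the new endpoint responds at all, you could run a check like the following from the server (a minimal sketch, assuming the default site name and port used in the script above – any HTTP status, including a 401 challenge, means the ISAPI handler is reachable):

    # Any HTTP response (200, 401, ...) means the handler answers; only a connection error indicates a problem
    try {
        $response = Invoke-WebRequest -Uri "http://localhost:8000/msmdpump.dll" -UseDefaultCredentials -UseBasicParsing
        Write-Host "HTTP status:" $response.StatusCode
    }
    catch [System.Net.WebException] {
        if ($_.Exception.Response) {
            Write-Host "HTTP status:" ([int]$_.Exception.Response.StatusCode)
        } else {
            Write-Host "Connection failed:" $_.Exception.Message
        }
    }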

Using Self-Signed Certificates for your Power BI DMG

In my previous post I showed how to set up a Power BI Data Management Gateway on a non-domain Azure VM. That final setup is also the starting point for this post, where we will use self-signed certificates to enable HTTPS/SSL connectivity to our DMG. So make sure that you have all prerequisites up and running before you continue reading.

Basically, the process to switch to HTTPS is pretty straightforward. Simply open your DMG, go to Settings and change from HTTP to HTTPS. Finally select your certificate and you are ready to go!
This may work in a corporate hybrid environment where everything is set up correctly, but for a non-domain Azure VM it is a bit more complicated – and that is what this post is about.

Besides the initial setup from my previous post there are some steps you need to do in advance in order for HTTPS connectivity to work:
1) Open the port that the DMG HTTPS connection uses in your Windows Firewall (default is port 8050) – see the PowerShell sketch right after this list
2) Create an endpoint for your Azure VM for the very same port
3) Create a self-signed certificate to be used to establish a secure connection
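For step 1), a quick PowerShell alternative to clicking through the Windows Firewall GUI could look like this (a sketch, assuming Windows Server 2012 or later and the default port 8050):

    # Allows inbound TCP traffic on the port the DMG uses for HTTPS (8050 by default)
    New-NetFirewallRule -DisplayName "Power BI DMG HTTPS" -Direction Inbound -Protocol TCP -LocalPort 8050 -Action Allow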

You should already be familiar with 1) and 2) as you needed to do the same steps for the HTTP port of your DMG (default is port 8051 here). To create a self-signed certificate you can simply follow the steps described here. The important thing is to use the fully qualified server name: CN=myserver.cloudapp.net
This is very important, otherwise the final connection will not work!

Your MakeCert command should look similar to this:
makecert -r -pe -n "CN=myserver.cloudapp.net" -b 01/01/2000 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

After you run the command the new certificate is automatically added to your user's personal certificate store and can be used when setting up HTTPS connectivity for your DMG:
SetupDMG
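As a side note: on Windows Server 2012 and later the certificate could presumably also be created with PowerShell instead of MakeCert – a sketch I have not tested in this exact setup, using the same placeholder DNS name as above:

    # Creates a self-signed SSL certificate for the public DNS name and puts it into the machine's Personal store
    New-SelfSignedCertificate -DnsName "myserver.cloudapp.net" -CertStoreLocation "Cert:\LocalMachine\My"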

Once you click [OK] it takes some time (~1 minute) until everything is updated and HTTPS connectivity can be used. Now you can use Excel and Power Query to search for the data sources that are published via OData. You will find all of them, but as soon as you try to load the data you will receive the following error:
ErrorPQ

That's a bit surprising as the DMG is configured correctly using HTTPS and the very same OData feed worked just fine with HTTP. But here comes the flaw in my thinking that I was not aware of before talking to Benjamin Tang and Samuel Zhang from the product team. Until that point I always thought that the data is loaded through the cloud and there is no direct connection from my client to the server:
WrongConnectivityThought
But this is not how it works!

What actually happens in the background is that the request to the Power BI OData service gets redirected to the server and the client connects directly to the server:
ActualConnection

And this is also where our PQ error originates as the certificate used is not a trusted certificate on the client. In order to make it a trusted certificate you need to install it on the client. This can be done by following these steps:
1) Launch Internet Explorer using “Run as Administrator”
    (I’m serious here, this only works with IE but not with e.g. Chrome!)
2) Navigate to https://myserver.cloudapp.net:8050 (or whatever servername/port you used)
3) Continue to the website and ignore the certificate error
4) Press [Cancel] at the popup that asks for credentials
5) Now click on the "Certificate error" in the menu bar and press "View certificates"
ViewCertificate
6) Now install the certificate:
InstallCertificate
(Please note that this option is only available if you are using Internet Explorer launched as Administrator!!!)
7) Select the location where you want to store the certificate (Current User or Local Machine, depending on whether it should be installed for you only or for all users)
8) Whichever storage location you use, just make sure that you place the certificate in the "Trusted Root Certification Authorities" store on the next page:
InstallCertificateStore

Once you have installed the certificate into your Trusted Root Certification Authorities store, the Power Query connection works again – but now it is using HTTPS!
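If you have to do this on several client machines you could also script the import – a sketch that assumes you have already exported the certificate to a .cer file (the path below is just a placeholder) and that you run it from an elevated PowerShell session on Windows 8 / Server 2012 or later:

    # Imports the exported certificate file into the machine-wide Trusted Root Certification Authorities store
    Import-Certificate -FilePath "C:\Temp\myserver.cloudapp.net.cer" -CertStoreLocation "Cert:\LocalMachine\Root"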

Of course this solution is only for demo and testing purposes; in a real-world scenario you would already have your certificates in place and everything should indeed work out of the box.

Using Power BI DMG on Non-Domain Azure VMs – August 2014 Update

In one of my recent posts I explained how to use the Power BI Data Management Gateway to access data hosted in a SQL Server running on an Azure VM. At the time of writing that post the steps to establish connectivity were not quite intuitive. With the latest update of the Data Management Gateway (version 1.2.5303.1 and later) things got a bit easier. However, there is still a little thing that you have to configure to make everything work smoothly. First of all, I highly recommend reading my first post on this topic to fully understand the actual issue and why it does not work out of the box.

When creating a new Data Source, the DMG has to be reachable from the machine on which the Data Source Manager (the click-once application where you enter your SQL credentials) is executed. The hostname is derived from the DMG, and for Azure VMs this does not, by default, reflect the hostname under which the VM is publicly reachable. The hostname would be "MyServer" whereas the public DNS name is "MyServer.cloudapp.net". To check which hostname the DMG is using you can execute the following PowerShell command:

    [System.Net.Dns]::GetHostEntry("localhost")

In order to change this hostname you can either join the VM to a domain (which is not what we want to do here) or use the following approach:

Open the System settings of your server:
SystemSettings_default
You will notice that both "Computer name" and "Full computer name" show the same name, and both without the suffix ".cloudapp.net". In order to change this we need to click the "Change settings" button right next to the names to open the System Properties:
SystemProperties

Again, click [Change …] to open the computers domain settings:
DomainSettings
As you can see, the “Full computer name” does not show our required suffix “.cloudapp.net” yet. We can change this in the dialog available via the [More …] Button:
DNS_Settings
Here we can set our “Primary DNS suffix” – we set it to “cloudapp.net” (without leading dot) to reflect our public DNS name.

By clicking [OK] in all open windows you will see the new full name "MyServer.cloudapp.net" now being used as the "Full computer name" everywhere. Also, our PowerShell command from above now shows the correct hostname. Note that this change also requires a reboot of the VM.
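If you prefer scripting this change, the primary DNS suffix can – to the best of my knowledge – also be set via the registry; a hedged sketch (I assume the GUI dialog above writes to these very values):

    # Sets the primary DNS suffix ("Domain"/"NV Domain" under the Tcpip parameters key) and reboots
    $tcpipParams = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
    Set-ItemProperty -Path $tcpipParams -Name "Domain" -Value "cloudapp.net"
    Set-ItemProperty -Path $tcpipParams -Name "NV Domain" -Value "cloudapp.net"
    Restart-Computer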

Once the machine has rebooted and the DMG is running again, you can use any client machine to create your Data Source, which was previously only possible from the server directly and required an RDP connection. HTTPS connectivity with self-signed certificates also works with this approach, which I will show in one of my next posts – so stay tuned!

Hiding Dimension Details in Analysis Services

An Analysis Services cube is usually accessed by a wide variety of people, all of them having different roles in the business – for example Product Managers, Sales Representatives, Key Account Managers and so on. Sometimes it is necessary to hide detailed information of a given dimension or attribute from a certain user or role. In this post I will show you how this can be accomplished.

Let's take a simple example and assume we have the role "ProductManager" which must not be allowed to see any specific details of a single customer but must be able to see aggregation levels above the customer like Gender, Education, etc.

This can be accomplished quite easily by changing the security settings of our Product Manager role. We navigate to “Dimension Data” > Dimension “Customer” > Attribute “Customer”.
Next go to the “Advanced” tab and manually specify your Allowed Set as:
{[Customer].[Customer].[All Customers]}
Role_ProductManager

One important thing here is to uncheck "Enable Visual Totals" as shown above, otherwise you will not see any data at all! An end user who belongs to the role ProductManager would see the following result in Excel:
PivotTable_ProductManager
He sees aggregated values at the Education and Gender level but cannot see details of any single customer.
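By the way, you do not have to log on as an actual member of the role to verify this – as an SSAS administrator you can impersonate the role via the Roles property of the connection string. A small sketch using the ADOMD.NET client (server, database, cube and measure names are placeholders for your environment):

    # Connects with the "ProductManager" role enforced and runs a test query;
    # the flattened resultset should only contain the aggregation levels the role allows
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient") | Out-Null

    $conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection(
        "Data Source=localhost;Catalog=AdventureWorksDW;Roles=ProductManager")
    $conn.Open()

    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "SELECT [Measures].[Internet Sales Amount] ON 0, [Customer].[Education].[Education].MEMBERS ON 1 FROM [Adventure Works]"

    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) {
        Write-Host $reader.GetValue(0) $reader.GetValue(1)
    }
    $reader.Close()
    $conn.Close()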

A scenario that's a bit more complex to achieve is to hide not only one attribute but all attributes of a given dimension. Of course, we could use the approach described above and do it for each and every attribute in the dimension. This would technically work, but it is time-consuming to implement and also hard to maintain (e.g. if new attributes are added). Ideally we only want to define it once in a single place and it should work for all attributes, existing and new ones. We can make use of Auto-Exists within our security role here. Auto-Exists basically reflects the security defined on one attribute onto all other attributes in the same dimension. So if you are restricted to [Country]="Austria" you can only see cities in Austria, and for continents you would only see Europe but not America.

Having this in mind we could create a dummy customer (which has no associated facts) on which we put our security. Instead of creating a dummy customer manually in our relational DB we can also make use of the special UnknownMember which can be activated for any dimension. SSAS then automatically creates a dummy member for us which can be used for our security purposes:
Dimension_UnknownMember
Note that we have set the UnknownMemberName to "No Details Available" to inform the user that there are no details available for him/her.
The role itself would then specify the UnknownMember in the Allowed Set as:
{[Customer].[Customer].UNKNOWNMEMBER}
Role_UnknownMember
Again, make sure that “Enable Visual Totals” is unchecked!

Now our Excel looks as follows:
PivotTable_UnknownMember
(Note: “Show items with no data on rows” was enabled in PivotTable options here)

Using any attribute or hierarchy of our secured dimension in a filter would show this result:
Filter_UnkownMember

One last option you may consider is to set the visibility of the UnknownMember to "Hidden", thus also hiding the UnknownMember and only revealing the All-member, which was our initial requirement. However, I think using the UnknownMember's name to provide some further information for the end user makes a lot of sense, e.g. naming it "No Details Available due to Security Restrictions" or "Restricted by Security".
This is of course not possible if you create your dummy customer manually – that's why I prefer using the UnknownMember.

Note1:
During my tests I came across a bug: if you change the visibility of the UnknownMember after you have already defined security, you end up seeing all details. Even a redeploy or a ProcessFull does not solve the problem. What you need to do is to explicitly change the definition of the role (e.g. by adding a blank to the Allowed Set) and redeploy the role afterwards.

Note2:
None of the above works for SSAS Tabular as it does not have the concept of (disabled) Visual Totals, which is essential for this solution to work!

Restoring a SSAS Tabular Model to Power Pivot

It is a very common scenario to create an SSAS Tabular Model out of a Power Pivot Model contained in an Excel workbook. Microsoft even created a wizard (or actually a separate Visual Studio project type) that supports you in doing this. Furthermore, this process is a major part of Microsoft's strategy to cover Personal BI, Team BI and Corporate BI within one technology – xVelocity. This all works just fine, but there may also be scenarios where you need to do it the other way round – converting a Tabular model to Power Pivot. Several use cases come to my mind, but I am sure the most important one is making data available offline, e.g. for sales people to take with them in their everyday work. And in this blog post I will show how this can be done!

But before taking a closer look into how this can be accomplished, let's first see how the import from Power Pivot to SSAS Tabular works. To do this, start SQL Server Profiler and connect to your Tabular instance. Then create a new Tabular project in Visual Studio based on an existing Power Pivot workbook. At this point you will notice a lot of events happening on the SSAS Tabular server. The most important event for us is "Command End" with the EventSubclass "9 – restore":
RestoreFromPowerPivot

SSAS actually restores a backup from a “Model.abf” backup file which is located in our project directory that we just created:
BackupExtractedFromPowerPivot

So far so good – but where does this file come from?

Well, the origin of the file has to be the Excel workbook that we imported. Knowing that all new Office formats ending with "x" (.xlsx, .docx, …) are basically ZIP files, we can inspect our original Excel workbook by simply renaming it to ".zip". This allows us to browse the Excel file structure:
ExcelUnzipped

We will find a folder called "xl" which contains a sub-folder called "model". This folder contains one item called "item.data". If you take a closer look at the file sizes you may realize that both the "Model.abf" file that we restored and the "item.data" file from our Excel workbook have the exact same size:
FileProperties

A Coincidence? Not really!

What happens behind the scenes when you import a Power Pivot model into SSAS Tabular is that this "item.data" file gets copied into your project directory, is renamed to "Model.abf" and is then restored to the SSAS Tabular workspace instance using a standard database restore.
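You could even replicate that restore step by hand – a sketch assuming the "Analysis Services PowerShell" cmdlets are installed (the same prerequisite as for the script further down; server name, path and database name are placeholders):

    # Restores the extracted Model.abf to a Tabular instance, just like the import wizard does
    Import-Module SQLASCMDLETS -ErrorAction SilentlyContinue
    Restore-ASDatabase -Server "localhost\TAB2012" -RestoreFile "C:\MyTabularProject\Model.abf" -Name "RestoredFromPowerPivot" -AllowOverwrite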

Having this information probably makes you think: If it works in one direction, why wouldn’t it also work the other way round? And it does!

So here are the steps that you need to do in order to restore your SSAS Tabular backup into an Excel Power Pivot workbook:

  1. Create a backup of your SSAS Tabular database and rename it to “item.data”
  2. Create an empty Excel workbook and add a simple linked table to the Excel data model (which is actually Power Pivot).
    This is necessary to tell Excel that the workbook contains a Power Pivot model which has to be loaded once the file is opened.
  3. Close the Excel workbook and rename it from “MyFile.xlsx” to “MyFile.xlsx.zip”
  4. Open the .zip-file in Windows Explorer and locate the “\xl\model\”-folder
  5. Replace the “item.data” file with the file that you created in step 1.
  6. Rename the .zip-file back to “MyFile.xlsx”
  7. Open the Excel Workbook
  8. Voilà! You can now work with the data model as with any other Power Pivot model!

I tested this with an SSAS Tabular backup from SQL Server 2012 SP1 restored into the streamed version of Excel from Office 365 with the latest version of Power Pivot. I assume that it also works with older versions but I have not tested all combinations yet.

There are also some features that will not work, for example roles. If your Tabular database contains roles you will not be able to use this approach – Excel will complain that the Power Pivot model is broken. However, other Tabular features like partitions actually work, with the little limitation that you cannot change them later on in the Power Pivot model or process them separately:
PowerPivotPartitions
Another thing to note here is that only up to 3 partitions are allowed, otherwise you will get the same error as for roles. I think this is related to the limitation of 3 partitions in SQL Server Analysis Services Standard Edition, as Chris Webb described here.

Besides these obvious features there are also some other cool things that you can do in Tabular which are not possible in Power Pivot. Most (or actually all) of them are accessible only by using BIDS Helper – a great THANK YOU to the developers of BIDS Helper at this point!
BIDS Helper enables you to add classic multidimensional features to Tabular models, which is not possible using standard Visual Studio alone. Those include:

  • DisplayFolders
  • Translations (metadata only)
  • Actions

I tested it for DisplayFolders and Actions and both also work in Power Pivot after the backup is restored, and I further assume that all the other things will work just fine as well.
Just keep in mind that Power Pivot is basically a fully featured Analysis Services instance running within Excel!

For my (and your) convenience I also created a little PowerShell script that does all the work:

    # Load the assembly with the ZipFile class
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    # Load the assembly to access Analysis Services
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
    # Also install "Analysis Services PowerShell" according to http://technet.microsoft.com/en-us/library/hh213141.aspx

    # INPUT-Variables, change these to match your environment
    $rootFolder = "D:\Test_PowerShell\"
    $emptyExcelFile = $rootFolder + "EmptyExcel.xlsx"
    $ssasServerName = "localhost\TAB2012"
    $ssasDatabaseName = "AdventureWorks"

    # internal variables
    $newExcelFile = $rootFolder + $ssasDatabaseName + ".xlsx"
    $newExcelFileZip = $newExcelFile + ".zip"
    $unzipFolder = $rootFolder + "TEMP_ExcelUnzipped"
    $backupFile = $rootFolder + $ssasDatabaseName + ".abf"
    $itemDestination = $unzipFolder + "\xl\model\item.data"

    # Copy the empty Excel file and rename it to ".zip"
    Copy-Item -Path $emptyExcelFile -Destination $newExcelFileZip

    # Unzip the file using the ZipFile class
    [System.IO.Compression.ZipFile]::ExtractToDirectory($newExcelFileZip, $unzipFolder)

    # Create a backup of the SSAS Tabular database
    Backup-ASDatabase -Server $ssasServerName -Name $ssasDatabaseName -BackupFile $backupFile -AllowOverwrite -ApplyCompression

    # Copy the backup-file to our extracted Excel folder structure
    Copy-Item -Path $backupFile -Destination $itemDestination -Force

    # Check if the target file exists and delete it
    if (Test-Path -Path $newExcelFile) { Remove-Item -Path $newExcelFile }

    # Zip the folder-structure again using the ZipFile class and rename it to ".xlsx"
    [System.IO.Compression.ZipFile]::CreateFromDirectory($unzipFolder, $newExcelFile)

    # Cleanup the unnecessary files
    Remove-Item -Path $unzipFolder -Recurse
    Remove-Item -Path $backupFile
    Remove-Item -Path $newExcelFileZip

The last thing to mention here is that I don't know whether this is officially supported in any way by Microsoft – actually I am pretty sure it is not – so watch out what you are doing and don't complain if something is not working as expected.

Analysis Services Security: Multiple Roles in Tabular vs. Multidimensional – Part 2

In one of my recent posts I highlighted how multiple security roles are handled in Tabular as opposed to Multidimensional Analysis Services models. In this post I will go into more detail and focus on how the security is evaluated for both models.

I would like to continue using the same example as in the last post with the following roles:
“Bikes” – is restricted to [ProductCategory] = “Bikes”
“Brakes” – is restricted to [ProductSubCategory] = “Brakes”
“DE” – is restricted to [Country] = “DE”

I used the following MDX query to test both the Tabular and the Multidimensional model:

    SELECT
    NON EMPTY {[Geography].[Country Region Name].[Country Region Name].MEMBERS} ON 0,
    NON EMPTY {[Product].[Product Category Name].[Product Category Name].MEMBERS} ON 1
    FROM [Model]
    WHERE (
    [Measures].[Reseller Total Sales]
    )

The last thing I mentioned was how the combination of roles “Bikes” and “DE” could be expressed in SQL:

    SELECT
        [ProductCategory],
        [Country],
        SUM([Reseller Sales])
    FROM <table>
    WHERE [ProductCategory] = 'Bikes'
        OR [Country] = 'Germany'
    GROUP BY
        [ProductCategory],
        [Country]

 

When running the previous MDX query on our Tabular model using roles "Bikes" and "DE" we see a very similar query being executed against the xVelocity Storage Engine:
(Note: There are also some SE queries executed to get the elements for rows and columns, but the query below is the main one.)

    SELECT
    [Geography_1fa13899-0770-4069-b7cb-5ddf22473324].[EnglishCountryRegionName],
    [Product_11920c93-05ae-4f1c-980e-466dfbcfca2a].[CalculatedColumn1 1],
    SUM([Reseller Sales_fc635e72-28dc-4156-80d5-43b805f8df1c].[SalesAmount])
    FROM [Reseller Sales_fc635e72-28dc-4156-80d5-43b805f8df1c]
    LEFT OUTER JOIN [Reseller_d52b9c6f-8d2d-4e23-ae4c-2fc57c1d968a]
            ON [Reseller Sales_fc635e72-28dc-4156-80d5-43b805f8df1c].[ResellerKey]
                =[Reseller_d52b9c6f-8d2d-4e23-ae4c-2fc57c1d968a].[ResellerKey]
    LEFT OUTER JOIN [Geography_1fa13899-0770-4069-b7cb-5ddf22473324]
        ON [Reseller_d52b9c6f-8d2d-4e23-ae4c-2fc57c1d968a].[GeographyKey]
            =[Geography_1fa13899-0770-4069-b7cb-5ddf22473324].[GeographyKey]
    LEFT OUTER JOIN [Product_11920c93-05ae-4f1c-980e-466dfbcfca2a]
        ON [Reseller Sales_fc635e72-28dc-4156-80d5-43b805f8df1c].[ProductKey]
            =[Product_11920c93-05ae-4f1c-980e-466dfbcfca2a].[ProductKey]
    WHERE
    (COALESCE((PFDATAID( [Product_11920c93-05ae-4f1c-980e-466dfbcfca2a].[CalculatedColumn1 1] ) = 6))
        OR
    COALESCE((PFDATAID( [Geography_1fa13899-0770-4069-b7cb-5ddf22473324].[CountryRegionCode] ) = 5)));

Well, not really readable, so let's make it a bit nicer by removing those ugly GUIDs, etc.:

    SELECT
        [Geography].[EnglishCountryRegionName],
        [Product].[ProductCategory],
        SUM([Reseller Sales].[SalesAmount])
    FROM [Reseller Sales]
    LEFT OUTER JOIN [Reseller]
        ON [Reseller Sales].[ResellerKey] = [Reseller].[ResellerKey]
    LEFT OUTER JOIN [Geography]
        ON [Reseller].[GeographyKey] = [Geography].[GeographyKey]
    LEFT OUTER JOIN [Product]
        ON [Reseller Sales].[ProductKey] = [Product].[ProductKey]
    WHERE [Product].[ProductCategory] = "Bikes"
    OR [Geography].[CountryRegionCode] = "Germany";

This looks very similar to our SQL query. The special thing about it is the WHERE clause, which combines the restrictions of both roles using OR and is then propagated down to our [Reseller Sales] fact table – and that's the reason why we see what we want and expect to see: all sales that were either made with "Bikes" OR made in "Germany":
MultipleRoles_tabular

Another very important thing to note and remember here is that the security restrictions get propagated into and are evaluated within the query. This is done for each and every query (!), which is usually not a problem but may become crucial if you use dynamic security.
To test this with dynamic security I introduced a new role called "CustData" which is going to replace "Bikes" for this test. It is restricted on the 'Product' table as:

    =([Product Category Name] = IF( CUSTOMDATA() = "1", "Bikes", "Clothing"))

So instead of using the connection string "…;Roles=Bikes,DE; …" I now used "…;Roles=CustData,DE;CustomData=1; …", which of course returns the exact same results. However, the query now changed to the following (already beautified) xVelocity query:

    SELECT
        [Geography].[EnglishCountryRegionName],
        [Product].[ProductCategory],
        SUM([Reseller Sales].[SalesAmount])
    FROM [Reseller Sales]
    LEFT OUTER JOIN [Reseller]
        ON [Reseller Sales].[ResellerKey] = [Reseller].[ResellerKey]
    LEFT OUTER JOIN [Geography]
        ON [Reseller].[GeographyKey] = [Geography].[GeographyKey]
    LEFT OUTER JOIN [Product]
        ON [Reseller Sales].[ProductKey] = [Product].[ProductKey]
    WHERE [Product].$ROWFILTER IN '0x000000…000000000000000000000fffff00000000000ffffffff'
    OR [Geography].[CountryRegionCode] = "Germany";

Instead of using a direct filter on [ProductCategory] we now see a filter on $ROWFILTER ?!?

### ASSUMPTIONS START ###
I have to admit that I am currently not entirely sure what this means but I assume the following:
Preceding the main query another xVelocity query is executed which is important for us:

    SELECT
    [Product].[RowNumber],
    [Product].[ProductCategory],
    COUNT()
    FROM [Product];

This query fetches each [RowNumber] and its associated [ProductCategory]. Internally, the [RowNumber] column is created for every table. This is related to the columnar storage that xVelocity uses. Elaborating on this in detail would go far beyond the scope of this blog post. For more details on the RowNumber column please refer to http://msdn.microsoft.com/en-us/library/hh622558(v=office.15).aspx which describes the Excel data model – which is actually Power Pivot and therefore also applies to Tabular. (In general this link contains a lot of in-depth information on the tabular data model and the columnar storage concepts!)

I further assume that our security expression is then evaluated against this temporary table to create a bitmap index of which rows match the security expression and which don't. This result is then applied to our main query using the WHERE clause [Product].$ROWFILTER IN '0x0000….'
For all subsequent queries the above query on [RowNumber] and [ProductCategory] is not executed again, so I assume that the bitmap index gets cached internally by Analysis Services. I further assume that if the bitmap index gets cached, it is also shared between users belonging to the same role, which would be similar to multidimensional models.
### ASSUMPTIONS END ###

So the important take-away for tabular is that the security gets propagated into the query and down to the fact table. Combining multiple roles on this level using OR delivers the expected results.

 

For multidimensional models this is slightly different. You can define security either on the Database Dimension (which gets inherited down to all Cube Dimensions) or on the Cube Dimension directly. Defining security on the Database Dimension already makes it very clear that the security restrictions are independent of any fact table/measure group – a Database Dimension may be used in several cubes, so the engine cannot know in advance which measure group to use. Security for multidimensional models is defined on the multidimensional space defined by that dimension. If one role is not restricted on a dimension at all, the user will always see the whole dimension and its associated values, even if a second role restricts that dimension. And this causes unexpected results, as the user may see the whole cube.
In terms of SQL the query could be expressed as:

    SELECT
        [ProductCategory],
        [Country],
        SUM([Reseller Sales])
    FROM <table>
    WHERE ( [ProductCategory] = 'Bikes' OR 1 = 1)
        OR ( 1 = 1  OR [Country] = 'Germany')
    GROUP BY
        [ProductCategory],
        [Country]

The left side of the inner OR statements represents the role “Bikes” whereas the right part represents the “DE” role. It should be very obvious that due to the combination of both roles you finally see everything without any restriction!

Another important thing to point out is that security for multidimensional models is only evaluated once and not for every query. So even if you have complex dynamic security, its evaluation only hits you once for the first user that connects with a role, and the result is cached for all queries of that user and also shared with other users belonging to that role.

 

I hope this gives some more insights on how tabular and multidimensional handle multiple roles and their fundamental differences!

Using Power BI Data Management Gateway on Non-Domain Azure VM

UPDATE AUGUST 2014:
There were some changes to the DMG in August 2014. Please refer to my new blog post which addresses the issues with the new version! However, I still recommend you to read this post first in order to fully understand the original issue!
The new post can be found here.

 

I am currently preparing some demos and examples for Power BI. As you can expect for demos, you do not want to put too much effort into building up any infrastructure, so I decided to use an Azure VM to host my SQL databases and SSAS cubes. To keep things simple the Azure VM is not joined to a domain, which is fine for SQL where I can use SQL authentication; for SSAS I use msmdpump.dll. After everything was set up I wanted to install the Data Management Gateway to expose my SQL tables via OData to Power Query and Online Search.
Bryan C. Smith recently published an article on that very same topic, Creating a Demo Power BI Data Gateway using an Azure Virtual Machine, but for some reason it did not work for me. Further, as Bryan already mentions in the first paragraph, his setup is not supported and it's also a bit of a hack (modifying the hosts file, and so on).
So I started my own investigation and came up with another solution which only uses out-of-the-box features and tools and is actually quite simple. Another thing to mention here is that it will (probably) not work for scheduled data refreshes but only for exposing the SQL database via OData and making it searchable in Power Query.
Having that said, here are the steps to follow:

1) Set up the Data Management Gateway itself on the Azure VM as described here: Create a Data Management Gateway. This should work just fine and the Gateway should be in the "Registered" state on the Azure VM and in the "Ready" state in the Power BI Admin Center:
 AdminCenter_GW_Ready

2) Create a new Data Source on top of the previously created Gateway as described here: Create a Data Source and Enable OData Feed in Power BI Admin Center

Here you will usually receive an error when you want to enter credentials for the SQL Database:
AdminCenter_DS

By clicking the [credentials] button a new window pops up. Please note that this is a click-once application that actually runs on your client and is independent of your actual browser!
DataSource_Error

If the Gateway is running on an Azure VM, or basically any machine which cannot be reached from your current client, you will receive an error that a connection could not be established, or something similar.
Assuming you called your Azure VM "MyCloudServer" and it is perfectly reachable via "MyCloudServer.cloudapp.net", you will receive an error saying that "MyCloudServer" (without ".cloudapp.net") could not be resolved – which is actually true, as the correct server would be "MyCloudServer.cloudapp.net". Unfortunately, this server name cannot be changed anywhere as far as I know. As the name cannot be changed we need to make it somehow "resolvable". Bryan manually modifies the hosts file and makes "MyCloudServer" point to the public IP address of "MyCloudServer.cloudapp.net". This should usually work just fine, but somehow it did not work for me. Also, the public IP address may change when you reboot your Azure VM, and you would then need to modify the hosts file again.

So these are the findings we have made so far:
– the Data Source Manager is a click-once application which runs on the client
– the client must be able to resolve "MyCloudServer" (a quick check for this follows right below)
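The second point is easy to check from any client (a trivial sketch; "MyCloudServer" is the placeholder name from above):

    # Throws "No such host is known" on a client that cannot resolve the short VM name
    [System.Net.Dns]::GetHostEntry("MyCloudServer")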

After some thinking I ended up with the following:
The only machine in my scenario that can correctly resolve “MyCloudServer” is the Azure VM itself! So instead of running the Data Source Manager on my client I simply connected to the Power BI Admin Center from my server and repeated the steps from above there.
Now everything works fine and we can proceed:
DataSource_Success
This connectivity check is only done once and has no further impact (I am not 100% sure on this :) ). However, the username and password are stored and used for all subsequent connections through the gateway, e.g. for OData access, so make sure the user has the necessary access rights.

In the next step you can select the tables and views that you want to expose:
DataSource_TablesViews

Those can then be searched and queried using Excel and Power Query from any client:
Excel_PowerQuery

And that’s it – The simple trick is to run the Power BI Admin Center from the server itself and create the data source there!

Hope this helps everyone who is dealing with the same issue or wants to set up a demo environment too.

Analysis Services Security: Multiple Roles in Tabular vs. Multidimensional

In terms of security, Tabular and Multidimensional models of SQL Server Analysis Services are very similar. Both use roles to handle specific security settings. Those roles are then assigned to users and groups. This is very trivial if a user only belongs to one role – the user is allowed to see what is specified in that role. But it can become very tricky if a user belongs to more than one role.

For both Tabular and Multidimensional, the security settings of multiple roles are additive. This means that you will not see less if you are assigned multiple roles, but only more. Let's assume we have the following roles:
“Bikes” – is restricted to [ProductCategory] = “Bikes”
“Brakes” – is restricted to [ProductSubCategory] = “Brakes”
“DE” – is restricted to [Country] = “DE”

A user belonging to roles "Bikes" and "Brakes" will see all products that belong to "Bikes" and all products that belong to "Brakes". This is OK and returns the expected results, as both roles secure the same dimension/table. This applies to Tabular and also to Multidimensional.

However, if roles secure different dimensions/tables it gets a bit more tricky. A user may belong to roles "Bikes" and "DE". For Multidimensional this is a real problem as it finally results in the user seeing the whole cube! This is "by design" and can be explained as follows:
Role "Bikes" does not have any restriction on [Country], so all countries are visible; role "DE" has no restriction on [ProductCategory], so all product categories are visible. As roles are additive the user is allowed to see all countries and also all product categories – hence the whole cube:
MultipleRoles_multidimensional

Basically you would expect to see "Bikes" sales for all countries and "Germany" sales for all product categories, but you end up seeing much more than this. If you have ever faced this problem in real life you know that this is probably not the intended behavior your customers want to see. Usually Active Directory groups are used and assigned to SSAS roles, so this can happen quite easily without anyone even noticing (except the user who is happy to see more :) )!
Chris Webb wrote an excellent blog post on how to deal with those kinds of security requirements in multidimensional models here.

For Tabular this behavior is somewhat similar. A user belonging to both roles is also allowed to see all countries and all product categories – this is because security settings are additive, same as for Multidimensional. But even though this is true in terms of the structure (rows, columns) of the query, we still get a different result in terms of values!
Here is the same query on a tabular model with the same security settings:
MultipleRoles_tabular

This is exactly what we and our customers would expect to see – all sales for "Germany" and also every sale related to "Bikes"! In Tabular models, security applied to a given table cascades down to all related tables – in this case to our fact table. If a table is indirectly secured by different roles which put security on different tables, those restrictions are combined using OR. In terms of SQL this could be expressed as:

    SELECT
        [ProductCategory],
        [Country],
        SUM([Reseller Sales])
    FROM <table>
    WHERE [ProductCategory] = 'Bikes'
        OR [Country] = 'Germany'
    GROUP BY
        [ProductCategory],
        [Country]

Further pivoting the result would show the same as the MDX query.

Thinking back to some of my multidimensional cubes where I had to deal with multiple Roles this “slight difference” would have been really handy and would have saved me a lot of time that I put into custom security solutions using bridge tables, assemblies, etc.

In my next post I will go into much more detail on how the tabular security model works so stay tuned!

UPDATE: Part 2 can be found here

Scheduled Data Refresh of OData Sources in Power BI

With the recent public release of Power BI it is finally possible to refresh Power Pivot workbooks online. Once a workbook is uploaded to SharePoint Online and "Enabled" for Power BI, you can schedule an automatic data refresh for the Power Pivot model. Though, at the moment only a very limited number of data sources are supported:

  • Windows Azure SQL Database data
  • Open Data protocol (OData) feeds
  • On premises data sources that are enabled for access in Power BI for Office 365

Especially for publicly available data, OData feeds are very popular – for example from Wikipedia. Those public data feeds usually do not require any authentication, so one would expect these data sources to work flawlessly with a scheduled data refresh in Power BI. Well, you are wrong here!
When you create a simple Power Pivot model with one OData source, e.g. Wikipedia, publish it and set up scheduled data refresh, you will receive the following error:

Refresh_Error

The error message itself does not reveal any insights into the actual error. If you are in the (more or less) lucky situation that you are also the owner of that site, you can go to the Power BI Admin Center to get some further details on the error:

AdminCenter_ErrorLog

By downloading the CSV or hovering over a single Message you can see the details:

  1. "Failed to find a match for the data source (connection string: data source=http://publicdata.clouddatahub.net/Web/Tables/fa9af6681cd64206b3aafe6d12408117/V1/Data;include atom elements=Auto;include expanded entities=False;integrated security=SSPI;persist security info=false;time out=600;schema sample size=25;retry count=5;retry sleep=100;keep alive=False;max received message size=4398046511104;base url=http://publicdata.clouddatahub.net/Web/Tables/fa9af6681cd64206b3aafe6d12408117/V1/Data) for the user 'Gerhard.Brueckl@XYZ.onmicrosoft.com'. The user is unauthorized or, the corresponding data source is not created. Check the user's permission to the data source or create a data source for the connection string. Tracing ID: 23f244ad-7921-48f7-b13a-ef68e8cf5503"

It says that no corresponding data source exists for my user, or that I do not have permission to access it.

In our case the reason is that our user is unauthorized – the actual data source that was used does not necessarily have to exist in the user's data sources (those can be found via "My Power BI") for scheduled data refresh to work.

So you will ask yourself what could possibly go wrong here as you are just accessing a public OData feed?!?
The reason is that in the connection string the "Integrated Security" property is set to SSPI by default. In your local Excel/Power Pivot model this works just fine as the SSPI context can be resolved and sent to the OData feed. Sure, the OData feed actually ignores this information as it is public, but from an authentication point of view everything works correctly!

The problem with Scheduled Data Refresh is that SSPI simply does not work, as it cannot be resolved and Power BI cannot use any service user for your request if SSPI is defined. The first thing that comes to mind would be to simply set the authentication method to "Anonymous", which would be perfectly fine for a public data feed. However, Power Pivot does not support Anonymous authentication for OData sources:

OData_Authentication

As SSPI does not work we need to use "Basic" here and provide "User ID" and "Password". Which User ID/Password, you may ask? – and the simple answer is: "It does not matter!"
You can provide any values here; in my case I used "random" for both User ID and Password! Another thing you need to ensure is that "Persist Security Info" is set to True so your "Password" is stored in the final connection string, making Power BI think that authentication is defined correctly within the connection string so that Power BI does not have to do anything itself.
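Just to illustrate, the relevant part of the resulting connection string then looks roughly like this (shortened; the feed URL and the dummy credentials are placeholders, the property names follow the format shown in the error message above):

    data source=http://publicdata.clouddatahub.net/...;integrated security=Basic;user id=random;password=random;persist security info=true;...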

Once you change those settings your data refresh will work like a charm:

Refresh_Success

One last thing you may notice is the "Running Time". It takes significantly longer to do the refresh in Power BI compared to doing the refresh locally. Just keep that in mind, especially for bigger data feeds.

In the future I hope that Microsoft will introduce some kind of "Anonymous" authentication within the dropdown of Power Pivot, or simply check at some point whether the OData feed requires any authentication at all, and overwrite the authentication mode specified in the connection string when refreshing.