Since the last major update last year, Power BI offers a set of APIs that can be used to interact with both the content and the data stored in Power BI. Microsoft provides a good set of samples on how to use the APIs on GitHub and also an interactive Apiary web UI which you can use to build and test API calls on the fly. However, it can still be quite cumbersome as you have to deal with all the REST API calls and the returned JSON on your own. So I decided to write a little C# wrapper where you simply pass in your Azure AD Application Client ID and can then deal with all objects of the Power BI API as if they were regular C# objects.
Here is a little example of how to list all available reports and get the EmbedURL of a given tile using the PowerBIClient:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using pmOne.PowerBI;
using pmOne.PowerBI.PowerBIObjects;

            Console.WriteLine("Press <Enter> to exit …");
            Console.ReadLine();
        }
    }
}
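The middle part of the listing is missing above. Purely as an illustration of the idea – PowerBIClient is the wrapper class mentioned above, but member names like Reports, Dashboards, Tiles and EmbedUrl are my guesses, not the wrapper's documented API – the shape of such a program could be:

using System;
using pmOne.PowerBI;

class Program
{
    static void Main(string[] args)
    {
        // hypothetical API – pass in your Azure AD Application Client ID
        var pbiClient = new PowerBIClient("<Azure AD Application Client ID>");

        // list all available reports (hypothetical collection)
        foreach (var report in pbiClient.Reports)
            Console.WriteLine(report.Name);

        // get the EmbedURL of a given tile (hypothetical navigation)
        Console.WriteLine(pbiClient.Dashboards[0].Tiles[0].EmbedUrl);

        Console.WriteLine("Press <Enter> to exit …");
        Console.ReadLine();
    }
}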
As you can see, it's pretty simple and very easy to use, even for non-developers. You can find all the source code and the sample application for download below. The code as I have written it is very likely not the best code possible, but it works for my needs, is straightforward and simple, and saves me a lot of work and time when dealing with the Power BI API. Also, if the API changes, you may need to adapt the code accordingly. For the future I hope that Microsoft provides some metadata so that Visual Studio can build all this code automatically using e.g. Swagger. But for the time being feel free to use, improve or extend my code!
The original request for this calculation came from one of my blog readers who dropped me a mail asking if it is possible to calculate the Pearson Correlation Coefficient (PCC or PPMCC) in his PowerPivot model. In case you wonder what the Pearson Correlation Coefficient is and how it can be calculated – as I did in the beginning – these links (What is PCC, How to calculate PCC) are very helpful and also offer some examples and videos explaining everything you need to know about it. I highly recommend reading those articles before you proceed here, as I will not go into the mathematical details of the calculation again in this post, which is dedicated to the DAX implementation of the PCC.
Anyway, as I know your time is precious, I will try to sum up its purpose for you: "The Pearson Correlation Coefficient calculates the correlation between two variables over a given set of items. The result is a number between -1 and 1. A value higher than 0.5 (or lower than -0.5) indicates a strong relationship, whereas numbers towards 0 imply a weak to no relationship." The two values we want to correlate are our axes, whereas the single dots represent our set of items. The PCC calculates the trend within this chart, represented as an arrow above.
The mathematical formula that defines the Pearson Correlation Coefficient is the following:
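In standard notation, with x and y being the two variables and n the number of items in the set:

$$ r \;=\; \frac{n \sum x_i y_i \;-\; \sum x_i \sum y_i}{\sqrt{\,n \sum x_i^2 - \left(\sum x_i\right)^2\,}\;\sqrt{\,n \sum y_i^2 - \left(\sum y_i\right)^2\,}} $$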
The PCC can be used to calculate the correlation between two measures which can be associated with the same customer. A measure can be anything here – the age of a customer, his sales, the number of visits, etc. – but also things like sales of red products vs. sales of blue products. As you can imagine, this can be a very powerful statistical KPI for any analytical data model. To demonstrate the calculation we will try to correlate the order quantity of a customer with his sales amount. The order quantity will be our [MeasureX] and the sales will be our [MeasureY], and the set that we will calculate the PCC over are our customers. To make the whole calculation more readable I split it up into separate measures:
MeasureX := SUM ( 'Internet Sales'[Order Quantity] )
MeasureY := SUM ( 'Internet Sales'[Sales Amount] )
Based on these measures we can define further measures which are necessary for the calculation of our PCC. The calculations are tied to a set of items – in our case the single customers:
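The definitions of these intermediate measures are not shown above. A minimal sketch – assuming the items are identified by a 'Customer'[Customer Id] column, and following the formula given earlier – could look like this:

Sum_X := SUMX ( VALUES ( 'Customer'[Customer Id] ), [MeasureX] )
Sum_Y := SUMX ( VALUES ( 'Customer'[Customer Id] ), [MeasureY] )
Sum_XY := SUMX ( VALUES ( 'Customer'[Customer Id] ), [MeasureX] * [MeasureY] )
Sum_X2 := SUMX ( VALUES ( 'Customer'[Customer Id] ), [MeasureX] * [MeasureX] )
Sum_Y2 := SUMX ( VALUES ( 'Customer'[Customer Id] ), [MeasureY] * [MeasureY] )
Count_Items := COUNTROWS ( VALUES ( 'Customer'[Customer Id] ) )

Pearson :=
DIVIDE (
    [Count_Items] * [Sum_XY] - [Sum_X] * [Sum_Y],
    SQRT ( [Count_Items] * [Sum_X2] - [Sum_X] * [Sum_X] )
        * SQRT ( [Count_Items] * [Sum_Y2] - [Sum_Y] * [Sum_Y] )
)

The [Pearson] measure simply assembles the computational form of the formula from these building blocks; DIVIDE() guards against a zero denominator.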
This [Pearson] measure can then be used together with any attribute in our model – e.g. the Calendar Year – in order to track the changes of the Pearson Correlation Coefficient over the years. For those of you who are familiar with the Adventure Works sample DB, these numbers should not be surprising. In 2005 and 2006 the Adventure Works company only sold bikes, and usually a customer only buys one bike – so we have a pretty strong correlation here. However, in 2007 they also started selling Clothing and Accessories, which are in general cheaper than Bikes but are sold more often.
This has an impact on our Pearson value, which is very obvious in the screenshots above.
As you probably also realized, the Grand Total of our Pearson calculation cannot be directly related to the single years and may also be the complete opposite of the single values. This effect is called Simpson’s Paradox and is the expected behavior here.
[MeasureX] and [MeasureY] can be exchanged for any other DAX measures, which makes this calculation really powerful. Also, the set of items over which we want to calculate the correlation can be exchanged quite easily. Below you can download the sample Excel workbook and also a DAX query which could be used in Reporting Services or any other tool that allows execution of DAX queries.
I am very glad that I got selected as a speaker for two upcoming SQL Saturdays in June. First there is the SQL Saturday #409 in Rheinland, Germany on June 13 and the week after the SQL Saturday #419 in Bratislava, Slovakia on June 20.
For those of you who are new to the concept of PASS SQL Saturdays: this is a series of free-of-charge events all around the globe where experienced speakers talk about all topics around the Microsoft SQL Server platform and beyond. As I said, it's free, you just need to register in time in order to get a ticket – so better be fast before all slots are taken!
I will do a session together with my colleague Markus Begerow (b, t) on "Power BI on SAP HANA" – two technologies I got to work with a lot recently. We are going to share our experience on how to use Power BI to extract data from SAP HANA, the different interfaces you can use, and the advantages and drawbacks of each. Even though it is considered a general session, we will also do a lot of hands-on work and elaborate on some of the technical details you need to be aware of, both on the Power BI side and on the SAP HANA side.
In Bratislava I will speak about "Lessons Learned: SSAS Tabular in the real world", where I will present the technical and non-technical findings I have made in the past when implementing SSAS Tabular models at larger scales for various customers. I will cover the whole process, from choosing SSAS Tabular as your engine (or not choosing it) and things to consider during implementation, to the administrative challenges once the solution is in production.
I think both are really interesting sessions and I would be happy to see a lot of you there and have some interesting discussions!
If you have ever tried to implement a recursive calculation in DAX similar to how you would have done it back in the good old days of MDX (see here), you have probably come up with a DAX formula similar to the one below:
Sales ForeCast :=
IF (
    NOT ( ISBLANK ( [Sales] ) ),
    [Sales],
    -- naive attempt (reconstructed): previous month's forecast times the growth rate;
    -- assumes a 'Date'[Date] column at month granularity
    CALCULATE ( [Sales ForeCast], DATEADD ( 'Date'[Date], -1, MONTH ) )
        * ( 1 + [Planned GrowthRate] )
)
However, in DAX you would end up with the following error:
A circular dependency was detected: 'Sales'[Sales ForeCast], 'Sales'[Sales ForeCast].
This makes sense, as you cannot reference a variable within its own definition – e.g. X = X + 1 cannot be defined, at least from a mathematical point of view (or according to my limited math skills). MDX is somewhat special here, as the SSAS engine takes care of this recursion by taking the IF() into account.
So where could you possibly need a recursive calculation like this? In my example I will do some very basic forecasting based on monthly growth rates. I have a table with my actual sales and another table with my expected monthly growth as percentages. If I do not have any actual sales, I want to use my expected monthly growth to calculate my forecast, starting with my last actual sales:
This is a very common requirement for finance applications. It is very easy to achieve in pure Excel but very tough to do in DAX, as you have probably realized on your own – which is what finally led you here.
In Excel we would simply add a calculation like the one below and propagate it down to all rows (assuming column C contains your Sales, D your Planned Growth Rate, and M is the column where the formula itself resides):
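The screenshot of the Excel formula is not included here; a sketch for a data row 5 (with the previous month's result in M4) might be:

=IF(C5<>"", C5, M4*(1+D5))

If actual sales exist in C5 they are used directly; otherwise the previous row's forecast in M4 is multiplied by (1 + growth rate) – exactly the recursion we now have to rebuild in DAX.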
In order to solve this in DAX we have to completely rewrite our calculation! The general approach that we are going to use was already explained by Mosha Pasumansky some years back, but for MDX. So I adapted the logic and changed it accordingly to also work with DAX. I split the solution into several steps:
1) Find the last actual sales – April 2015 with a value of 35
2) Find out by which value we have to multiply the previous month's value to get the current month's forecast
3) Calculate the natural logarithm (DAX LN() function) of the value in step 2)
4) Sum all values from the beginning of time until the current month
5) Raise our sum value from step 4) to the power of [e] using the DAX EXP() function
6) Do some cosmetics: display our new value only if no actual sales exist and take care of aggregation on higher levels
Note: The new Office 2016 Preview introduces a couple of new DAX functions, including PRODUCTX() which can be used to combine the Steps 3) to 5) into one simple formula without using any complex LN() and EXP() combinations.
Step 1: We can use this formula to get our last sales:
Last Sales :=
IF (
    ISBLANK ( CALCULATE ( [Sales], DATEADD ( 'Date'[Date], 1, MONTH ) ) ),
    [Sales],
    1
)
It basically checks if there are no [Sales] next month. If that is the case, we use the current [Sales] value as our [Last Sales]; otherwise we use a fixed value of 1, as a multiplication by 1 has no impact on the final result.
Step 2: Get our multiplier for each month:
MultiplyBy :=
-- measure name is illustrative
IF (
    ISBLANK ( [Last Sales] ),
    1 + [Planned GrowthRate],
    [Last Sales]
)
If we do not have any [Last Sales], we use our [Planned GrowthRate] for our later multiplication/summation; otherwise we take our [Last Sales] value.
Step 3 and 4: As we cannot use "Multiply" as our aggregation function, we first have to calculate the LN of each value and sum those up from the first month until the current month:
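The measure itself is not shown above; a sketch combining steps 3) and 4) – assuming a contiguous 'Date' table whose [Month] column uniquely identifies a calendar month, and the [MultiplyBy] helper from step 2 – could be:

Cumulated LN :=
CALCULATE (
    SUMX ( VALUES ( 'Date'[Month] ), LN ( [MultiplyBy] ) ),
    -- BLANK() as start date means "from the beginning of time"
    DATESBETWEEN ( 'Date'[Date], BLANK (), MAX ( 'Date'[Date] ) )
)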
Step 5 and 6: If there are no actual sales, we display our calculated Forecast:
Sales ForeCast :=
SUMX (
    VALUES ( 'Date'[Month] ),
    IF ( ISBLANK ( [Sales] ), EXP ( [Cumulated LN] ), [Sales] )
)
Note that we need to use SUMX over our months here in order to also get correct subtotals on higher levels, e.g. years. That's all the SUMX is necessary for; the IF itself should be self-explanatory here.
So here is the final result – check out the last column:
The calculation is flexible enough to handle missing sales. So if, for example, we only had sales in January, our recursion would start there and already use the [Planned GrowthRate] to calculate the February forecast value:
Two weeks ago at the German SQL Server Conference 2015 I attended Peter Myers' session about Mastering the CUBE Functions in Excel. (PS: Peter is also speaking at our upcoming SQLSaturday #374 in Vienna next week and at PASS SQLRally in Copenhagen the week after.) After his session we had a further discussion about this topic and our experiences on how to use Excel's CUBE functions to build nice dashboards with native Excel functionality that also work with e.g. Excel Services. It is always great to exchange with people that share the same passion for the same topic! One thing we both agreed is currently missing is a way to get the MDX UniqueName of something that is selected in a slicer, a filter, or simply in a cell using the CUBEMEMBER function. I once used a special cube measure, created in the MDX Script, which returned the UniqueName of a given member that was selected together with this special measure. For this to work with Excel you need to know how Excel builds the MDX when querying cube values using the CUBEVALUE function. Here is a little example of the kind of formula I mean, and the MDX query it produces:
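The screenshots are not reproduced here; as an illustration only – the connection name and members are examples – a formula like

=CUBEVALUE("ThisWorkbookDataModel",
    "[Measures].[Internet Sales Amount]",
    "[Product].[Category].&[1]")

results in Excel sending an MDX query of roughly this shape:

SELECT
{ ( [Measures].[Internet Sales Amount], [Product].[Category].&[1] ) } ON 0
FROM [Model]
CELL PROPERTIES VALUE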
So it basically creates a tuple that contains everything you pass into the CUBEVALUE function as a parameter. Knowing this, we can create a calculated measure that returns the MDX UniqueName of this tuple using the MDX TupleToStr() and AXIS() functions:
MEMBER [Measures].[Excel TupleToStr] AS (
    TupleToStr ( Axis(0).Item(0) )   -- returns the unique names of the first tuple on axis 0
)
Replacing the [Measures].[Internet Sales Amount] of our initial CUBEVALUE-function with this new measure would return this to Excel:
OK, so far so good, but nothing really useful yet, as you need to hardcode the member's UniqueName into the CUBEVALUE function anyway – so you already know the UniqueName. However, this is not the case if you are dealing with Pivot Table page filters and/or slicers! You can simply refer to them within the CUBEVALUE function, but you never get the UniqueName of the selected item(s). Well, at least not directly! But you can use the approach described above, using a special MDX calculated measure, to achieve this, as I will demonstrate on the next pages.
Calculated measures can only be created using the Pivot Table interface but can also be used in CUBE functions. So the first thing you need to do is to create a Pivot Table and add a new MDX Calculated Measure:
!Caution! some weird MDX coming !Caution!
You may wonder why such a complex MDX expression is necessary and what it actually does. It is the following: based on the example MDX query that Excel generates (as shown above), this is a universal MDX expression that returns the MDX UniqueName of any other member that is selected together with our measure in the CUBEVALUE function. It also removes the UniqueName of the measure itself, so the result can be used again with any other measure, e.g. [Internet Sales Amount]. The reason why it is rather complex is that Excel may group similar queries and execute them as a batch/as one query to avoid too many executions, which would slow down the overall performance. So we cannot just reference the first element of our query, as it may belong to any other CUBEVALUE function. This MDX deals with all of these kinds of issues.
The MDX above allows you to specify only two additional filters, but it may be extended to any number of filters that you pass in to the CUBEVALUE function. This would be the general pattern:
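The original pattern is not reproduced here. As a greatly simplified sketch only – it ignores the batching problem described above and assumes the measure is the first element of each tuple – the idea for two additional filters looks like this:

MEMBER [Measures].[Excel TupleToStr] AS (
    -- simplified: take the 2nd and 3rd element of the current query tuple
    -- and concatenate their unique names
    Axis(0).Item(0).Item(1).UniqueName
    + "," +
    Axis(0).Item(0).Item(2).UniqueName
)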
After creating this measure we can now use it in our CUBE-functions in combination with our filters and slicers:
You may have noticed that I had to use CUBERANKEDMEMBER here. This is because filters and slicers always return a set, and if we passed a set into our CUBEVALUE function, a different MDX query would be generated, which would not allow us to extract the single UniqueNames of the selected items using the approach above (or any other MDX I could think of). So this approach currently only works with single selections! I hope that the Excel team will implement a native function to extract the UniqueName(s) of the selected items in the future to make this workaround obsolete!
Once we have our UniqueName(s) we can use them in e.g. a CUBESET function to return the Top 10 days for a given group of products (filter) and the selected year (slicer):
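The original workbook formula is not shown here; a sketch of the idea – with the connection name and the cell references E1/E2 (holding the UniqueNames returned by our helper measure) being examples only – could look like this:

=CUBESET("ThisWorkbookDataModel",
    "TOPCOUNT([Date].[Date].[Date].MEMBERS, 10, ([Measures].[Internet Sales Amount], " & E1 & ", " & E2 & "))",
    "Top 10 Days")

The UniqueNames are simply concatenated into the set expression, so whatever is selected in the filter and slicer dynamically changes the TOPCOUNT.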
And that’s it!
So why is this so cool?
It works with SSAS (multidimensional and tabular) and Power Pivot as Excel still uses MDX to query all those sources. It may also work with SAP HANA’s ODBO connector but I have not tested this yet!
It does not require any VBA, which would not work in Excel Services anyway – this solution does work there!
The calculation is stored within the Excel Workbook so it can be easily shared with other users!
There is no native Excel functionality that would allow you to create a simple Top 10 report that works with filters and slicers as shown above, or any more complex dynamic report/dashboard with dynamic filtering.
So no more to say here – Have fun creating your interactive Excel web dashboards!
Note: You may also rewrite any TOPCOUNT expression and use the 4th and 5th parameters of the CUBESET function instead. This is more native and does not require as much MDX knowledge. However, if you are not familiar with MDX, I highly recommend learning it before you write any advanced calculations as shown above, as otherwise the results might be a bit confusing in the beginning – especially if you filter and use TOPCOUNT on the same dimension!
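For illustration – connection name and members are again examples – the native variant sorts the set via CUBESET's sort_order and sort_by arguments (2 = sort descending by the given measure):

=CUBESET("ThisWorkbookDataModel",
    "[Date].[Date].[Date].MEMBERS",
    "Days sorted by Sales",
    2,
    "[Measures].[Internet Sales Amount]")

The first 10 items of the sorted set can then be read with CUBERANKEDMEMBER.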
Almost two years ago I published the first version of my Dynamic ABC Analysis in Power Pivot, and by now it is the post with the most comments among all my blog posts. This has two reasons:
1) the formula was quite complex and not easy to understand or implement
2) the performance was not really great with bigger datasets
When the first of those comments flew in, I started investigating a new, advanced formula. At about the same time, Marco Russo and Alberto Ferrari published their ABC Classification pattern – a static version using calculated columns – at www.daxpatterns.com. When I had my first dynamic version ready I sent it to Marco and Alberto and asked if they were interested in the pattern and whether I could publish it on their website. Long story short – this week the new pattern was released and can now be found here:
It got some major performance improvements and was also designed with reusability in other models in mind. The article also contains some detailed explanations of how the formula actually works, but it is still very dense DAX which will take some time to be fully understood. The pattern also features some extended versions to address more specific requirements, but I think it's best to just read the article on your own.
Calculating the Events-In-Progress is a very common requirement and many of my fellow bloggers like Chris Webb, Alberto Ferrari and Jason Thomas already blogged about it and came up with some really nice solutions. Alberto also wrote a white-paper summing up all their findings which is a must-read for every DAX and Tabular/PowerPivot developer.
However, I recently had a slightly different requirement where I needed to calculate the Events-In-Progress for Time Periods – e.g. the Open Orders in a given month – and not only for a single day. The calculations shown in the white-paper only work for a single day so I had to come up with my own calculation to deal with this particular problem.
Before we can start we need to identify which orders we actually want to count when a Time Period is selected. Basically we have to differentiate between 6 types of orders and decide which of them to include in our calculation:
O1: Starts before the Time Period and ends after it
O2: Starts before the Time Period and ends in it
O3: Starts in the Time Period and ends after it
O4: Starts and ends in the Time Period
O5: Starts and ends after the Time Period
O6: Starts and ends before the Time Period
For my customer an order was considered "open" if it was open within the selected Time Period, so in our case we need to count only orders O1, O2, O3 and O4. The first calculation you would usually come up with may look like this:
Open Orders :=
-- measure name is illustrative
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ), -- our calculation, could also be a reference to a measure
    FILTER (
        'Internet Sales',
        'Internet Sales'[Order Date]
            <= CALCULATE ( MAX ( 'Date'[Date] ) )
            && 'Internet Sales'[Ship Date]
                >= CALCULATE ( MIN ( 'Date'[Date] ) )
    )
)
We apply custom filters here to get all orders that were ordered on or before the last day and were also shipped on or after the first day of the selected Time Period. This is pretty straight forward and works just fine from a business point of view. However, performance could be much better as you probably already guessed if you read Alberto’s white-paper.
So I integrated his logic into my calculation and came up with this formula (note that I could not use the final Yoda solution as I am using a DISTINCTCOUNT here):
Open Orders v2 :=
-- one possible arrangement of the original formula's building blocks;
-- an order counts if it was open on at least one day of the selected period
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ),
    FILTER (
        'Internet Sales',
        COUNTROWS (
            FILTER (
                VALUES ( 'Date'[Date] ),
                'Internet Sales'[Order Date] <= 'Date'[Date]
                    && 'Internet Sales'[Ship Date] >= 'Date'[Date]
            )
        ) > 0
    )
)
To better understand the calculation you may want to rephrase the original requirement to this: “An open order is an order that was open on at least one day in the selected Time Period”.
I am not going to explain the calculations in detail again as the approach was already very well explained by Alberto and the concepts are the very same.
An alternative calculation would be this one, which of course produces the same results but performs "differently":
Open Orders v3 :=
-- sketch: filter the two date columns separately
CALCULATE (
    DISTINCTCOUNT ( 'Internet Sales'[Sales Order Number] ),
    FILTER (
        VALUES ( 'Internet Sales'[Order Date] ),
        'Internet Sales'[Order Date] <= MAX ( 'Date'[Date] )
    ),
    FILTER (
        VALUES ( 'Internet Sales'[Ship Date] ),
        'Internet Sales'[Ship Date] >= MIN ( 'Date'[Date] )
    )
)
I said it performs "differently" because, as for all DAX calculations, performance depends on your model, the data, and the distribution and granularity of that data. So you should test which calculation performs best in your scenario. I did a simple comparison in terms of query performance for AdventureWorks and also for my customer's model, and the results are slightly different:
Calculation (Results in ms)
As you can see, the original FILTER calculation performs worst on both models. The last calculation performs better on the small AdventureWorks model, whereas on my customer's model (16 million rows) the calculation in the middle performs best. So it's up to you (and your model) which calculation you should prefer.
The neat thing is that all three calculations can be used with any existing hierarchy or column in your Date table, and of course also on the date level, just like the original calculation.
In my previous post I showed how to set up a Power BI Data Management Gateway on a non-domain Azure VM. The final setup is also the starting point for this post, where we will use self-signed certificates to enable HTTPS/SSL connectivity to our DMG. So make sure that you have all prerequisites up and running before you continue reading.
Basically, the process to switch to HTTPS is pretty straightforward. Simply open your DMG, go to Settings, and change from HTTP to HTTPS. Finally, select your certificate and you are ready to go! This may work in a corporate hybrid environment where everything is set up correctly, but for a non-domain Azure VM it is a bit more complicated, and that is what this post is about.
Besides the initial setup from my previous post, there are some steps you need to do in advance in order for HTTPS connectivity to work:
1) Open the port that the DMG HTTPS connection uses in your Windows Firewall (default is port 8050)
2) Create an endpoint for your Azure VM for the very same port
3) Create a self-signed certificate to be used to establish a secure connection
You should already be familiar with 1) and 2) as you needed to do the same steps for the HTTP port of your DMG (default is port 8051 here). To create a self-signed certificate you can simply follow the steps described here. The important thing is to use the fully qualified server name: CN=myserver.cloudapp.net. This is very important, otherwise the final connection will not work!
Your MakeCert command should look similar to this:
makecert -r -pe -n "CN=myserver.cloudapp.net" -b 01/01/2000 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
After you run the command, the new certificate is automatically added to your user's personal certificate store and can be used when setting up HTTPS connectivity for your DMG:
Once you click [OK] it takes some time (~1 minute) until everything is updated and HTTPS connectivity can be used. Now you can use Excel and Power Query to search for your data sources that are published via OData. You will find all of them, but as soon as you try to load the data you will receive the following error:
That's a bit surprising, as the DMG is configured correctly using HTTPS and the very same OData feed worked just fine with HTTP. But here comes the error in my thinking that I was not aware of before talking to Benjamin Tang and Samuel Zhang from the product team. Until that point I always thought that the data is loaded through the cloud and that there is no direct connection from my client to the server. But this is not how it works!
What actually happens in the background is that the request to the Power BI OData service gets redirected to the server and the client connects directly to the server:
And this is also where our PQ error originates, as the certificate used is not a trusted certificate on the client. In order to make it a trusted certificate you need to install it on the client. This can be done by following these steps:
1) Launch Internet Explorer using "Run as Administrator" (I'm serious here – this only works with IE but not with e.g. Chrome!)
2) Navigate to https://myserver.cloudapp.net:8050 (or whatever server name/port you used)
3) Continue to the website and ignore the certificate error
4) Press [Cancel] at the popup that asks for credentials
5) Now click on the "Certificate error" in the menu bar and press "View certificates"
6) Install the certificate (please note that this option is only available if you are using Internet Explorer launched as Administrator!)
7) Select the location where you want to store the certificate (Current User or Local Machine, depending on whether it should be installed for you only or for all users)
8) Whichever storage location you used, just make sure that you place the certificate in the "Trusted Root Certification Authorities" store on the next page:
Once you have installed the certificate into your Trusted Root Certification Authorities store, the Power Query connection works again – but now it is using HTTPS!
Of course this solution is only for demo and testing purposes, in a real world scenario you would already have your certificates in place and everything should indeed work out-of-the-box.
In one of my recent posts I explained how to use the Power BI Data Management Gateway to access data hosted in a SQL Server running on an Azure VM. At the time of writing that post the steps to establish connectivity were not quite intuitive. With the latest update of the Data Management Gateway (version 1.2.5303.1 and later) things got a bit easier. However, there is still one little thing that you have to configure to make everything work smoothly. First of all, I highly recommend reading my first post on this topic to fully understand the actual issue and why it does not work out-of-the-box.
When creating a new Data Source, the DMG has to be reachable from the machine on which the Data Source Manager (the ClickOnce application where you enter your SQL credentials) is executed. The hostname is derived from the DMG, and for Azure VMs this does not, by default, reflect the hostname under which the VM is publicly reachable. The hostname would be "MyServer" whereas the public DNS name is "MyServer.cloudapp.net". To check which hostname the DMG is using you can execute the following PowerShell command:
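The command itself is not shown above; one way to check the hostname the machine reports for itself is via .NET from PowerShell (a sketch, not necessarily the exact command from the original post):

# returns the fully qualified hostname the machine resolves for itself
[System.Net.Dns]::GetHostEntry("localhost").HostName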
In order to change this hostname you can either join the VM to a domain (which is not what we want to do here) or use the following approach:
Open the System settings of your server. You will notice that both "Computer name" and "Full computer name" show the same name, both without the suffix ".cloudapp.net". In order to change this, we need to click the "Change settings" button right next to the names to open the System Properties:
Again, click [Change …] to open the computer's domain settings: As you can see, the "Full computer name" does not show our required suffix ".cloudapp.net" yet. We can change this in the dialog available via the [More …] button: Here we can set our "Primary DNS suffix" – we set it to "cloudapp.net" (without the leading dot) to reflect our public DNS name.
By clicking [OK] on all open windows you will see the new full name "MyServer.cloudapp.net" now being used as "Full computer name" everywhere. Also, our PowerShell command from above now shows the correct hostname. Note that this change requires a reboot of the VM.
Once the machine has rebooted and the DMG is running again, you can use any client machine to create your Data Source, which was previously only possible directly from the server and required an RDP connection. HTTPS connectivity with self-signed certificates also works with this approach, which I will show in one of my next posts – so stay tuned!
It is a very common scenario to create a SSAS Tabular model out of a Power Pivot model contained in an Excel workbook. Microsoft even created a wizard (or actually a separate Visual Studio project type) that supports you in doing this. Even further, this process is also a major part of Microsoft's strategy to cover Personal BI, Team BI and Corporate BI within one technology: xVelocity. This all works just fine, but there may also be scenarios where you need to do it the other way round – converting a Tabular model to Power Pivot. Several use cases come to my mind, but I am sure the most important one is making data available offline, e.g. for sales people to take with them in their everyday work. And in this blog post I will show how this can be done!
But before taking a closer look at how this can be accomplished, let's first see how the import from Power Pivot to SSAS Tabular works. To do this, start SQL Server Profiler and connect to your Tabular instance. Then create a new Tabular project in Visual Studio based on an existing Power Pivot workbook. At this point you will notice a lot of events happening on our SSAS Tabular server. The most important event for us is "Command End" with the EventSubclass "9 – restore":
SSAS actually restores a backup from a "Model.abf" backup file which is located in the project directory that we just created:
So far so good – but where does this file come from?
Well, the origin of the file has to be the Excel workbook that we imported. Knowing that all new Office formats ending with "x" (.xlsx, .docx, …) are basically ZIP files, we can inspect our original Excel workbook by simply renaming it to ".zip". This allows us to browse the Excel file structure:
We will find a folder called "xl" which contains a sub-folder called "model". This folder contains one item called "item.data". If you take a closer look at the file sizes you may realize that the "Model.abf" file that we restored and the "item.data" file from our Excel workbook have the exact same size:
A Coincidence? Not really!
What happens behind the scenes when you import a Power Pivot model into SSAS Tabular is that this "item.data" file gets copied into your project directory, is renamed to "Model.abf", and is then restored to the SSAS Tabular workspace instance using a standard database restore.
Having this information probably makes you think: If it works in one direction, why wouldn’t it also work the other way round? And it does!
So here are the steps that you need to do in order to restore your SSAS Tabular backup into an Excel Power Pivot workbook:
Create a backup of your SSAS Tabular database and rename it to “item.data”
Create an empty Excel workbook and add a simple linked table to the Excel data model (which is actually Power Pivot). This is necessary to tell Excel that the workbook contains a Power Pivot model which has to be loaded once the file is opened.
Close the Excel workbook and rename it from “MyFile.xlsx” to “MyFile.xlsx.zip”
Open the .zip-file in Windows Explorer and locate the “\xl\model\”-folder
Replace the “item.data” file with the file that you created in step 1.
Rename the .zip-file back to “MyFile.xlsx”
Open the Excel Workbook
Voilà! You can now work with the data model just as with any other Power Pivot model!
I tested this with a SSAS Tabular backup from SQL Server 2012 SP1 being restored to the streamed version of Excel from Office 365 with the latest version of Power Pivot. I assume that it also works with older versions but have not tested all combinations yet.
There are also some features that will not work, for example roles. If your Tabular database contains roles, you will not be able to use this approach – Excel will complain that the Power Pivot model is broken. However, other Tabular features like partitions actually do work, with the little limitation that you cannot change them later on in the Power Pivot model or process them separately. Another thing to note here is that only up to 3 partitions are allowed, otherwise you will get the same error as for roles. I think this is related to the limitation of 3 partitions for SQL Server Analysis Services Standard Edition, as Chris Webb described here.
Besides these obvious features there are also some other cool things that you can do in Tabular which are not possible in Power Pivot. Most (or actually all) of them are accessible only by using BIDS Helper – a great THANK YOU to the developers of BIDS Helper at this point! BIDS Helper enables you to add classical multidimensional features to Tabular models, which is not possible using standard Visual Studio alone. Those include:
Display Folders
Translations (metadata only)
Actions
I tested it for Display Folders and Actions, and both also work in Power Pivot after the backup is restored; I further assume that the other features will work just fine as well. Simply keep in mind that Power Pivot is basically a fully featured Analysis Services instance running within Excel!
For my (and your) convenience I also created a little PowerShell script that does all the work:
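The script itself is not included here; a sketch of what it automates – server, database and file names are examples, and it assumes the SQLASCMDLETS module (for Invoke-ASCmd) and .NET 4.5 are available, and that the workbook already contains a Power Pivot model as described in step 2 – could look like this:

$serverName   = "localhost\TABULAR"
$databaseName = "MyTabularDB"
$workbook     = "C:\Temp\MyFile.xlsx"
$backupFile   = "C:\Temp\item.data"

# 1) back up the Tabular database via XMLA
$xmla = @"
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object><DatabaseID>$databaseName</DatabaseID></Object>
  <File>$backupFile</File>
  <AllowOverwrite>true</AllowOverwrite>
</Backup>
"@
Invoke-ASCmd -Server $serverName -Query $xmla

# 2) replace xl/model/item.data inside the workbook (a .xlsx is just a ZIP archive)
Add-Type -AssemblyName System.IO.Compression.FileSystem
$zip = [System.IO.Compression.ZipFile]::Open($workbook, 'Update')
$zip.GetEntry("xl/model/item.data").Delete()
[System.IO.Compression.ZipFileExtensions]::CreateEntryFromFile($zip, $backupFile, "xl/model/item.data") | Out-Null
$zip.Dispose()

Working with the ZIP archive directly also saves the manual renaming to ".zip" and back described in the step list above.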
The last thing to mention here is that I don’t know if this is officially supported in any way by Microsoft – actually I am pretty sure it is not – so watch out what you are doing and don’t complain if something is not working as expected.