Processing Azure Analysis Services with OAuth Sources (like Azure Data Lake Store)

As you probably know from my last blog post, I am currently upgrading the PowerBI reporting platform of one of my customers from a PowerBI backend (dataset hosted in the PowerBI service) to an Azure Analysis Services backend. The upgrade/import of the dataset into Azure Analysis Services itself worked pretty flawlessly, and after switching the connection of the reports everything worked as expected and everyone was happy. However, things got a bit tricky when it came to automatically refreshing the Azure Analysis Services database, which was based on an Azure Data Lake Store. For the original PowerBI dataset this was pretty straightforward, as a scheduled refresh from an Azure Data Lake Store data source works out of the box. For Azure Analysis Services this is a bit different.

When you build and deploy your data model from Visual Studio, you are prompted for the credentials to access ADLS, which are then stored in the data source object of AAS. As you probably know, AAS uses OAuth authentication to access data from ADLS – and this causes quite some problems. OAuth is based on tokens, and those tokens are only valid for a limited time – by default 2 hours. This basically means that you can process your database within the next 2 hours, but any processing after that will fail with an error message saying that the token has expired. (The above applies to all OAuth sources!)
This problem is usually solved by using an Azure Service Principal instead of a regular user account, as its token does not expire. Unfortunately, this is not supported for ADLS data sources at the moment, so you have to work around the issue.

So the current situation that we need to solve is as follows:

  • we can only use regular user accounts to connect AAS to ADLS as service principals are not supported yet
  • the token expires after 2 hours
  • the database has to be processed on a regular basis (daily, hourly, …) without any manual interaction
  • manually updating the token is (of course) not an option

Before you continue here, make sure that you read this blog post first: https://blogs.msdn.microsoft.com/dataaccesstechnologies/2017/09/01/automating-azure-analysis-service-processing-using-azure-automation-account/
It describes the general approach of using Azure Automation to process an Azure Analysis Services model, and most of the code in this blog post is based on it!
Also this older blog post will be a good read as some concepts and code snippets are reused here.

Back to our example – as we were already using Azure Automation for some other tasks, we decided to also use it here. Also, PowerShell integrates very well with other Azure components and was the language of choice for us. To accomplish our goal we had to implement 3 steps:

  1. acquire a new OAuth token
  2. update the ADLS data source with the new token
  3. run our processing script

I could copy the code for the first step more or less from one of my older blog posts (here), where I used PowerShell to acquire an OAuth token to trigger a refresh in PowerBI.
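
In essence, the token acquisition is a single REST call against the Azure AD token endpoint. The snippet below is only a minimal sketch of that call using the resource-owner password flow of a native AAD application – $TenantId, $ClientId, $Username and $Password are placeholders for the Runbook parameters/credentials and not part of the original script:

Acquire OAuth Token (sketch)
# minimal sketch: request a new OAuth token for Azure Data Lake Store
# ($TenantId, $ClientId, $Username and $Password are placeholders)
$tokenEndpoint = "https://login.microsoftonline.com/$TenantId/oauth2/token"
$body = @{
    grant_type = "password"
    resource   = "https://datalake.azure.net/"   # resource URI of Azure Data Lake Store
    client_id  = $ClientId
    username   = $Username
    password   = $Password
}
$oauthToken = Invoke-RestMethod -Method Post -Uri $tokenEndpoint -Body $body
# $oauthToken now contains access_token, refresh_token, expires_on, ...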

The second step is to update the ADLS data source of our Azure Analysis Services model. To get started, the easiest thing to do is to simply open the AAS database in SQL Server Management Studio and let it script the existing data source for you: AAS_Script_OAuth_DataSource
The resulting JSON will look similar to this one:

Update ADLS DataSource
{
  "createOrReplace": {
    "object": {
      "database": "Channel Analytics",
      "dataSource": "DS_ADLS"
    },
    "dataSource": {
      "type": "structured",
      "name": "DS_ADLS",
      "connectionDetails": {
        "protocol": "data-lake-store",
        "address": {
          "url": "https://mydatalake.azuredatalakestore.net"
        }
      },
      "credential": {
        "AuthenticationKind": "OAuth2",
        "token_type": "********",
        "scope": "********",
        "expires_in": "********",
        "ext_expires_in": "********",
        "expires_on": "********",
        "not_before": "********",
        "resource": "********",
        "id_token": "********",
        "kind": "DataLake",
        "path": "https://mydatalake.azuredatalakestore.net/",
        "RefreshToken": "********",
        "AccessToken": "********"
      }
    }
  }
}

The important part for us is the “credential” field. It contains all the information necessary to authenticate against our ADLS store. However, most of this information is sensitive, so only asterisks are displayed in the script. The rest of the JSON (everything except the “credential” field) is currently hardcoded in the PowerShell cmdlet, so if you want to use it, you need to change this manually!
The PowerShell cmdlet then combines the hardcoded part with an updated “credential”-field which is obtained by invoking a REST request to retrieve a new OAuth token. The returned object is modified a bit in order to match the required JSON for the datasource.
Once we have our final JSON created, we can send it to our Azure Analysis Services instance by calling the Invoke-ASCmd cmdlet from the SqlServer module.
Again, please see the original blog post mentioned above for the details of this approach.
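
Put together, the first two steps boil down to something like the sketch below. This is not the exact code from the download at the end of this post – the credential fields have to match the scripted data source definition shown above, and the server name as well as $AASCredential are placeholders:

Update ADLS DataSource via PowerShell (sketch)
# sketch: combine the (hardcoded) data source definition with a fresh credential
# built from the token request above ($oauthToken); names and values are examples only
$tmsl = @{
    createOrReplace = @{
        object = @{ database = "Channel Analytics"; dataSource = "DS_ADLS" }
        dataSource = @{
            type = "structured"
            name = "DS_ADLS"
            connectionDetails = @{
                protocol = "data-lake-store"
                address  = @{ url = "https://mydatalake.azuredatalakestore.net" }
            }
            credential = @{
                AuthenticationKind = "OAuth2"
                kind               = "DataLake"
                path               = "https://mydatalake.azuredatalakestore.net/"
                RefreshToken       = $oauthToken.refresh_token
                AccessToken        = $oauthToken.access_token
            }
        }
    }
} | ConvertTo-Json -Depth 10

Import-Module SqlServer
Invoke-ASCmd -Server "asazure://westeurope.asazure.windows.net/myaasserver" `
             -Query $tmsl -Credential $AASCredential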

After we have updated our datasource, we can simply call our regular processing commands which will then be executed using the newly updated credentials.
The script I wrote allows you to specify which objects to process in different ways:

  • the whole database (by leaving AASTableName and AASPartitionName empty)
  • one or more tables including all their partitions (by leaving only AASPartitionName empty)
  • or multiple partitions of a single table (by specifying exactly one AASTableName and multiple AASPartitionNames)

If multiple tables or partitions are specified, the elements are separated by commas (“,”)
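
Under the hood this simply ends up as a TMSL refresh command. The following is a simplified sketch of how the objects-array could be assembled from these parameters – $AASServer, $AASDatabaseName and $AASCredential are again placeholders, and the actual Runbook in the download handles a few more details:

Process Database/Tables/Partitions (sketch)
# sketch: build the TMSL refresh command depending on which parameters are set
if ([string]::IsNullOrEmpty($AASTableName)) {
    # whole database
    $objects = @( @{ database = $AASDatabaseName } )
}
elseif ([string]::IsNullOrEmpty($AASPartitionName)) {
    # one or more tables
    $objects = $AASTableName -split "," | ForEach-Object {
        @{ database = $AASDatabaseName; table = $_.Trim() } }
}
else {
    # one or more partitions of a single table
    $objects = $AASPartitionName -split "," | ForEach-Object {
        @{ database = $AASDatabaseName; table = $AASTableName; partition = $_.Trim() } }
}

$refreshCommand = @{ refresh = @{ type = "full"; objects = @($objects) } } | ConvertTo-Json -Depth 5
Invoke-ASCmd -Server $AASServer -Query $refreshCommand -Credential $AASCredential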

So to make the Runbook work in your environment, follow all the initial steps as described in the original blog post from Microsoft. In addition, you also need to create an Application (Type = “Native”) in your Azure Active Directory to obtain the OAuth token programmatically. This application needs the “Sign in and read user profile” permission from the API “Windows Azure Active Directory (Microsoft.Azure.ActiveDirectory)”:
AAD_App_Permissions
Also remember the ApplicationID – it will be used as a parameter for the final PowerShell Runbook (= parameter “ClientID”)!
When it comes to writing the PowerShell code, simply use the code from the download at the end of this blog post.

For the actual credential that you are using, make sure that it has the following permissions:

  • permission to update the AAS datasource (can be set in the AAS model or for the whole server)
  • access to the required ADLS files/folders which are processed (can be set e.g. via the ADLS Data Explorer)
  • (if you previously used your own account to do all the AAS and ADLS development, this should work just fine)

In general, a similar approach should work for all kinds of datasources that require OAuth authentication but so far I have only tested it with Azure Data Lake Store!

Download: AAS_Process_OAuth_Runbook.ps1

Upgrading your reports from PowerBI to Azure Analysis Services

Since April 2017 it is possible to build reports on top of datasets that are hosted in the PowerBI service. This was announced and described here and here in more detail. This might not seem like a big deal at first sight, but it can have a huge impact on how you work with PowerBI. By separating the data model from the report, you can have two or more people working independently with the same dataset. Also, the people who build the reports do not necessarily need to know how to build a data model – they just need to use it. So there are some clear advantages when you split up your workbook:

  • separation of duty (data modeler vs. report builder)
  • any number of reports on top of the same model
  • easy control over reports as the files are quite small (they only contain the definition of the report)

This is all pretty cool and, from my point of view, the way to go once you want to use the reports in production and/or have several people working on/with the same reports.

But let’s go a step further. After some time, as your data model grows, you realize that the reports get slow and the processing also takes a considerable amount of time to finish. The official upgrade path will then guide you to Azure Analysis Services, and you will migrate your data model to deal with the larger data volumes and to make use of the flexibility in processing you gained by the upgrade. This migration process is very well described here.

So far so good, but what happens to your reports? Last week I was in exactly the position described above and we had to migrate the existing reports (which were based on a dataset hosted in PowerBI) to Azure Analysis Services. As of now, there is no simple way to just change the connection from PowerBI to Azure Analysis Services, neither in PBI Desktop nor in the Service. But we could think of some options for how it might work:

  • rebuild all reports
  • use the REST API to update the connection string of the existing reports
  • modify the .pbix file manually (NOT OFFICIALLY SUPPORTED)

As you can imagine, rebuilding all reports was not really an option.

The next option, the PowerBI REST API, looked pretty promising at first sight. It allows you to retrieve and set the dataset that is used by your report. So the idea is to simply create a new dataset which points to Azure Analysis Services in Live Query mode, take the existing report and use the Rebind API call to bind it to the new AAS dataset. Even though this is supposed to work, I could not make it work in my environment. I tried everything I could think of, but nothing worked, so I eventually gave up on this approach.
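
For reference, the rebind itself would be a single REST call – roughly like the sketch below, based on the API documentation ($accessToken, $reportId and $targetDatasetId are placeholders):

Rebind Report (sketch)
# sketch of the Rebind API call ($accessToken, $reportId and $targetDatasetId are placeholders)
$headers = @{ Authorization = "Bearer $accessToken" }
$body    = @{ datasetId = $targetDatasetId } | ConvertTo-Json
Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "https://api.powerbi.com/v1.0/myorg/reports/$reportId/Rebind"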

So I was stuck there, but I knew that the information about the data source has to be somewhere in the .pbix file. In the past I already did something similar with Excel/PowerPivot files (“Restoring a SSAS Tabular model to PowerPivot”), so I thought I would also give it a try for .pbix files – and it turns out they are quite similar. For those of you who are new to this, most (if not all) of the files that are associated with a Microsoft tool and end with “x” (e.g. .xlsx/.docx/…) are just ZIP files in the end. To unzip them, simply rename them to .zip and use your favorite zip tool to open them. You will see a file structure similar to the one below:
pbix zip file content

(If your file contains a data model, you will see a file called “DataModelSchema” instead of “Connections”, and the next steps will not work in this case!) However, in our case, as the report is linked to a dataset hosted in the PowerBI service, our file does not contain any data itself but only the connection information to our data source. As you can guess, this information is stored in the “Connections” file.

To see what a connection to an Azure Analysis Services dataset looks like, I simply created a new PowerBI Desktop model with a Live Connection to AAS, saved it and opened it again as a zip file. The Connections file itself is just JSON, but the details are not really relevant here. I simply replaced the Connections file of my original report with the one from my new workbook linked to AAS, renamed it back to .pbix, opened it and voilà – my report was connected to AAS!
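
If you have a lot of reports to migrate, the same swap could also be scripted. I did it manually, so treat the following only as a rough PowerShell sketch – the paths are placeholders and you should of course only work on copies of your files:

Swap the Connections file (sketch)
# sketch: replace the "Connections" part of a report .pbix with the one from a
# .pbix that already holds a Live Connection to AAS (work on copies only!)
Add-Type -AssemblyName System.IO.Compression
Add-Type -AssemblyName System.IO.Compression.FileSystem

$reportPbix = "C:\Temp\MyReport.pbix"            # report to re-point
$aasPbix    = "C:\Temp\LiveConnectionToAAS.pbix" # new PBI Desktop file connected to AAS

# read the Connections entry from the AAS-connected file
$src    = [System.IO.Compression.ZipFile]::OpenRead($aasPbix)
$reader = New-Object System.IO.StreamReader($src.GetEntry("Connections").Open())
$aasConnections = $reader.ReadToEnd()
$reader.Close()
$src.Dispose()

# replace the Connections entry in the existing report
$dst = [System.IO.Compression.ZipFile]::Open($reportPbix, [System.IO.Compression.ZipArchiveMode]::Update)
$dst.GetEntry("Connections").Delete()
$writer = New-Object System.IO.StreamWriter($dst.CreateEntry("Connections").Open())
$writer.Write($aasConnections)
$writer.Close()
$dst.Dispose()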

This saved us a lot of time and we could move all of our reports within a couple of hours!

Please keep in mind, that this is not officially supported and might break your model. So make sure to always create a backup before you modify the contents of a pbix file manually!
I do not take any responsibility for any broken models or anything else that might happen!

C# Wrapper for Power BI REST API – Version 2

Some time ago, when Microsoft released the first version of the Power BI REST API, I wrote a wrapper for C# which allowed you to map the objects from the API to regular C# objects and work with them locally. However, there have been some major upgrades since then (they were actually already announced in July 2016, but I did not find any time to work on this again until now). Anyway, I just published a new version of my C# wrapper on my GitHub site: https://github.com/gbrueckl/PowerBI.API.Client

To use it, you first need to create an Azure AD application and get an ApplicationID – this is very well described here or can be done directly at https://dev.powerbi.com/apps

A new PowerBI API Client object can then be created using the ApplicationID:

Create a PowerBI API Client
PBIAPIClient pbic = new PBIAPIClient(ApplicationID);

Basically, most of the features described in the API reference are also included in the API wrapper. So you can now use C# to create your PowerBI model locally and deploy it to the PowerBI service! The only thing to keep in mind is that a dataset you create via the API can only be populated by pushing data into it using the Push/Streaming API. As this can be quite cumbersome sometimes, I also added the functionality to publish a whole C# DataTable with basically just two lines of code, e.g. to publish your reference data/dimensions:

Publish DataTable to PowerBI
// create a regular DataTable – but could also be derived from a SQL Database!
DataTable dataTable = new DataTable();
/* populate the dataTable */
// create a PBI table from a regular DataTable object
PBITable productsTable = new PBITable(dataTable);
// publish the table and push the rows from the dataTable to the PowerBI table
productsTable.PublishToPowerBI(true);

 

This snippet basically deploys the table structure to the PowerBI service and populates it with data from the DataTable:
Published_DataTable

 

For your “fact”-data you can also create single rows on your own using the PBIRow-object and publish them manually e.g. for WriteBack-scenarios:

Publish Rows to PowerBI
salesTable.DeleteRowsFromPowerBI();
PBIRow row = salesTable.GetSampleRow();
row.SetValue("ProductKey", 1);
row.SetValue("SalesDate", DateTime.Now);
row.SetValue("Amount_BASE", 100);
salesTable.PushRowToPowerBI(row);

Depending on the type of DataSet you choose (Push, PushStreaming or Streaming), you can also create DAX Measures or Relationships:

Add Measures and Relationships
salesTable.Measures.Add(new PBIMeasure("Sales Amount", "SUM('{0}'[{1}])", tableNameFacts, "Amount_BASE")); // adding a measure
dataset.Relationships.Add(new PBIRelationship("MyRelationship", salesTable.GetColumnByName("ProductKey"), productsTable.GetColumnByName("ProductKey")));

 

Of course, all the features that were already supported in the first version, are still supported:

  • Get Embed-URLs of Reports and Tiles
  • List Reports, Dashboards, Datasets, …

The new version also supports Streaming and PushStreaming datasets in the same way as it does regular Push datasets. For details on Streaming datasets please take a look at Real-time streaming in PowerBI.

I recommend exploring the API by simply building your first PowerBI Push/Streaming model on your own!
For the latest features and improvements please refer to the GitHub repository which will be updated frequently.

Any feedback and participation in the further development is highly appreciated and will be done via the GitHub repository.

Monitoring Azure Data Factory using PowerBI

Some time ago Microsoft released the first preview of a tool which allows you to monitor and control your Azure Data Factory (ADF). It is a web UI where you can select your Pipeline, Activity or Dataset and check its execution state over time. However, from my very personal point of view, the UI could be much better, and especially much clearer(!), than it is at the moment. But that’s not really a problem, as the thing I like most about ADF is that it is quite open for developers (for example Custom C#/.Net Activities) and it also offers a quite comprehensive REST API to control and manage it.
For our monitoring purposes we are mainly interested in the LIST interface but we could do basically every operation using this API. In my example I only used the Dataset API, the Slices API and the Pipeline API.

First we start with the Dataset API to get a list of all data sets in our Data Factory. This is quite simple as we just need to build our URL of the REST web service like this:

  1. https://management.azure.com/subscriptions/{SubscriptionID}/resourcegroups/{ResourceGroupName}/providers/Microsoft.DataFactory/datafactories/{DataFactoryName}/datasets?api-version={api-version}

You can get all of this information from the Azure Portal by simply navigating to your Data Factory and checking the URL, which will be similar to this one:

  1. https://portal.azure.com/#resource/subscriptions/1234567832324a04a0a66e44bf2f5d11/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/dataFactories/myDataFactory

So this would be my values for the API Call:
– {SubscriptionID} would be “12345678-3232-4a04-a0a6-6e44bf2f5d11”
– {ResourceGroupName} would be “myResourceGroup”
– {DataFactoryName} would be “myDataFactory”
– {api-version} would be a fixed value of “2015-10-01”
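
If you want to give the URL a quick test outside of Power BI first, a few lines of PowerShell will do – this is only a sketch and assumes you already have a valid bearer token for the ARM API in $accessToken:

List ADF Datasets (sketch)
# sketch: assemble the LIST datasets URL and call it ($accessToken is a placeholder)
$subscriptionId  = "12345678-3232-4a04-a0a6-6e44bf2f5d11"
$resourceGroup   = "myResourceGroup"
$dataFactoryName = "myDataFactory"
$apiVersion      = "2015-10-01"

$url = "https://management.azure.com/subscriptions/$subscriptionId/resourcegroups/$resourceGroup" +
       "/providers/Microsoft.DataFactory/datafactories/$dataFactoryName/datasets?api-version=$apiVersion"

$datasets = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $accessToken" }
$datasets.value | Select-Object name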

Once you have your URL you can use PowerBI to query the API using Get Data –> From Web
Next you need to authenticate using your Personal or Organizational Account – the same one that you use to sign in to the Portal – and also choose the level for which you want to use the credentials. I’d recommend setting it either to the subscription level or to the data factory itself, depending on your security requirements. This ensures that you are not asked for credentials for each different API:
ADF_PowerBI_Authentication

This works in a very similar way for the Slices API, the Pipeline API and all other available APIs! The other transformations I used are regular PowerQuery/M steps done via the UI, so I am not going to describe them in more detail here. Also, setting up the relationships in our final PowerPivot model should be straightforward.

Now that we have all the required data in place, we can start with our report. I used some custom visuals for the calendar view, some slicers and a simple table to show the details. I also used a Sankey Chart to visualize the dependencies between the datasets.

ADF_PowerBI_Monitoring_Dashboard
ADF_PowerBI_Monitoring_Dependencies

Compared to the standard GUI for monitoring, this provides a much better overview of the slices and their current states, and it also allows easy filtering. I am sure there are a lot of other PowerBI visualizations which would make a lot of sense here – these are just meant to give you an idea of how it could look, but of course you have all the freedom PowerBI offers you for reporting!

The only drawback at the moment is that you cannot reschedule/reset slices from PowerBI but for my monitoring-use-case this was not a problem at all. Also, I did not include the SliceRun API in my report as this would increase the size of the data model a lot, so detailed log information is not available in my sample report.

The whole PowerBI template is available for download on my GitHub site: https://github.com/gbrueckl/Azure.DataFactory.PowerBIMonitor

Debugging Custom .Net Activities in Azure Data Factory


UPDATE 2017-02-22:
I released a new toolset for Azure Data Factory which also integrates the Custom .Net Activity Debugger from this blog post. Please refer to the new GitHub project: https://github.com/gbrueckl/Azure.DataFactory.LocalEnvironment

(all links have been changed to refer to the new repository!)


Azure Data Factory (ADF) is one of the newer tools of the whole Microsoft Data Platform on Azure. It is Microsoft’s data integration tool, which allows you to easily load data from your on-premises servers to the cloud (and also the other way round). It comes with some handy templates to copy data from various sources to any available destination. However, when the Extract-Transform-Load (ETL) or ELT steps get more complicated, you will hit the (current) out-of-the-box limits of Azure Data Factory pretty soon. But this is OK, as ADF is a very open platform and allows you to integrate so-called “Custom Activities”. These can either be .Net/C# Activities or HDInsight Activities. In this post we will focus on .Net Activities and how to develop and debug them in an efficient way.

A .Net Activity is basically just a .dll which implements a specific interface (IDotNetActivity) and is then executed by Azure Data Factory. To be more precise, the .dll (and all its dependencies) is copied to an Azure Batch node which then executes the code when the .Net Activity is scheduled by ADF. So far so good, but the tricky part is to actually develop the .Net code, test it, and debug it. Well, not the code itself, but the more or less complex integration with the ADF interface, which you are very likely not familiar with in the beginning. In such cases it usually helps to run the code locally, step into the different code paths and examine the C# objects and their values. The problem is that you do not have a local instance of ADF on your workstation which you could use to start the .Net Activity and debug it interactively in Visual Studio.
So I wrote my own tool which you can add to the Solution that already contains the code of your Custom .Net Activity. Then you can simply link the CustomActivityDebugger to the JSON definitions and configurations of your ADF project, reference your custom code, configure some other things like SliceStart/SliceEnd and you are ready to go.
Once you start the CustomActivityDebugger it will read all ADF files and settings and basically create a local ADF environment which helps you to debug your custom .Net Activity using all settings and parameters as they would be passed in when the code is executed on the Azure Batch Node.

This little picture shows the CustomActivityDebugger in action – debugging custom .Net activities is now like debugging any other code:
Debugger_in_Action

All the sources including a simple ADF Project, a simple Custom Activity and setup instructions are available on my GitHub site:

https://github.com/gbrueckl/Azure.DataFactory.CustomActivityDebugger
https://github.com/gbrueckl/Azure.DataFactory.LocalEnvironment

Feel free to use it as it is and/or extend it to your needs.

Upcoming Conferences – Fall and Winter 2016

The next months are going to be quite busy for me. There will be a lot of conferences and events which I will attend and where I am also speaking. I am always happy to attend those events as they are a great place to interact with other community members and learn new stuff about the latest and coolest new tools. So let’s get started:

The first conference is the SQL Saturday #546 in Oporto (Portugal) on the 1st of October where I will be speaking about “What’s new in Analysis Services 2016”.
PASS_SQL_Saturday_546_Oporto
It is the first time for me to speak in Porto, but from what I heard from other people it is said to be a really great event, so I am really looking forward to it!

The week after – on the 6th of October – I will be speaking at the “Virtuelle SQL Server 2016 Konferenz” (German only) where I will be doing a session on SQL Server Polybase. As the name implies, it is virtual and you can attend from wherever you are at the time. Another important thing is that it is for free! You just need to register and you’ll get two full days of sessions around SQL Server and the Microsoft Data Platform.

In the same week, the SQL Saturday #555 in Munich (Germany) is taking place.
PASS_SQL_Saturday_555_Munich
Another free event for all community members. Again, just make sure you register in time to get a ticket! I will do a more advanced session on Polybase there with a more technical focus. So even if you attended the virtual event before, this might still be of great value for you!

Later in October, the world’s biggest conference about the Microsoft Data Platform and all other related technologies – the PASS Summit – will be in Seattle again, attracting 4,000+ attendees every year.
PASS_Summit_2016
It’s probably a bit late, but you can still register for this great event. Unfortunately, I forgot to submit any sessions for this conference so I won’t be there this year. (Note to myself: submit sessions for 2017!) Having been there several times already, I can only recommend this conference to anyone who is into Microsoft, its Data Platform and everything around it.

For January 2017, I am very happy to announce that our SQL Saturday #579 in Vienna (Austria) is taking place for the 4th year in a row now.
PASS_SQL_Saturday_579_Vienna
We had 250+ attendees last year and we are definitely aiming to hit the 300 mark this year! Again, it is free and you just need to register as soon as possible. (In case you cannot make it later on, please unregister as early as possible as otherwise you take away the slot of someone else). The Call for Papers is still open so if you want to speak at the event, simply submit your session details there.

Last but not least, the SQL Server Konferenz 2017 will be back in Darmstadt as in the previous years.
PASS_SQL_Konferenz_2017
This means 3-4 full days of sessions and trainings around the Microsoft Data Platform. Overall an awesome event organized by PASS Germany. It is also taking place for the 5th time in a row this year, and believe me, they have organized and optimized everything down to the tiniest detail, making it one of the best and biggest conferences in all of Europe!
I already submitted some sessions and I am hoping to be selected to speak there.

As you can see, a lot of things are ahead but I am really looking forward to all of them!

I just received my first Microsoft MVP Award!

Almost exactly 10 years ago I started my career in the field of Data Warehousing and Business Intelligence using the Microsoft Data Platform. Ever since, I have been active in the community, sharing my experiences and engaging with other people in various forums.
I have to admit that most of the things I know today I originally learned from my strong interaction with the community and by solving (or trying to solve) problems of other community members that I came across when browsing e.g. the MSDN forums. This exchange is always a win-win situation for both sides, as you grow with every challenge you master – and that’s what the community is all about: having someone to discuss your issues with and solving them together!

Therefore I am very proud that I was awarded the Microsoft MVP Award for my work in the community, which includes blogging, answering forum questions, speaking at conferences and supporting local Microsoft events – or, to put it in a more general way, my engagement in the community!

Microsoft_MVP_Logo_Horizontal_Preferred_Cyan300_CMYK_300ppi

Thanks to everyone I have ever worked with, for letting me learn from their problems, for inspiring me to start my blog and for keeping me active in the community over the last years! Thank you!

Visualizing SSAS Calculation Dependencies using PowerBI

 

UPDATE: This does not work for Tabular Models in Compatibility Level 120 or above as they do not expose the calculation dependencies anymore!

 

One of my best practices when designing bigger SQL Server Analysis Services (SSAS) Tabular models is to nest calculations whenever possible. The reasons for this should be quite obvious:

  • no duplication of logic
  • easier to develop and maintain
  • (caching)

However, this also comes with a slight drawback: after having created multiple layers of nested calculations, it can be quite hard to tell which measures a top-level calculation actually depends on. Fortunately, the SSAS engine exposes these calculation dependencies in one of its DMVs – DISCOVER_CALC_DEPENDENCY.
This DMV basically contains information about all calculations in the model:

  • Calculated Measures
  • Calculated Columns
  • Relationships
  • Dependencies to Tables/Columns
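
If you just want to have a quick look at the raw content of the DMV before building anything in PowerBI, you can also query it directly, e.g. from PowerShell – server and database names below are placeholders:

Query DISCOVER_CALC_DEPENDENCY (sketch)
# sketch: dump the raw DMV content (server/database names are placeholders)
Import-Module SqlServer
Invoke-ASCmd -Server "localhost\TABULAR" -Database "MyTabularModel" `
             -Query 'SELECT * FROM $SYSTEM.DISCOVER_CALC_DEPENDENCY'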

Chris Webb already blogged about this DMV some time ago and showed some basic (tabular) visualization within an Excel Pivot table (here). My post focuses on PowerBI and how we can make the content of this DMV much more appealing and visualize it in a way that is very easy to understand.
As the DMV is built up like a parent-child hierarchy, I had to use a recursive M-function to resolve this self-referencing table, which was actually the hardest part. Each row contains a link to a dependent object, which can have further dependencies of its own. In order to visualize this properly and let the user select a calculation of his choice and see its calculation tree, I needed to expand each row with all of its dependencies, keeping their link to the root node:

Here is a little example:

Object   Referenced_Object
A        B
B        C

The table above is resolved to this table:

Root   Object   Referenced_Object
A      A        B
A      B        C
B      B        C

The Root-column is then used to filter and get all dependent calculations.
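
Just to illustrate the expansion logic (the actual implementation in the .pbix is a recursive M-function), the same idea can be expressed in a few lines of PowerShell – the objects below are only those from the small example above:

Expand Dependencies (illustration only)
# illustration of the expansion logic – NOT the M-code used in the .pbix
$edges = @(
    [PSCustomObject]@{ Object = "A"; Referenced_Object = "B" },
    [PSCustomObject]@{ Object = "B"; Referenced_Object = "C" }
)

function Expand-Dependencies($root, $current) {
    foreach ($edge in $edges | Where-Object { $_.Object -eq $current }) {
        # emit the edge together with its root and then follow the referenced object
        [PSCustomObject]@{ Root = $root; Object = $edge.Object; Referenced_Object = $edge.Referenced_Object }
        Expand-Dependencies $root $edge.Referenced_Object
    }
}

$edges | ForEach-Object { Expand-Dependencies $_.Object $_.Object } | Format-Table
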
The PowerBI file also contains some other M-functions but those are mainly for ease-of-use and to keep the queries simple.

Once all the data was loaded into the model, I could use one of PowerBI’s custom visuals from the PowerBI Gallery – the Sankey Chart with Labels
SSAS_Visualizing_Tabular_Calc_Dependencies

Here is also an interactive version using the Publishing Feature of Power BI:

 

You can use the slicers to filter on the Table, the Calculation Type and the Calculation itself, and the visual shows all the dependencies down to the physical objects, i.e. Tables and Columns. This makes it a lot easier to understand your model and the dependencies that you built up over time.
I attached the sample PowerBI file below. You simply need to change the connection string to your SSAS Tabular server and refresh the data connections.

The PowerBI-file (*.pbix) can be downloaded here: SSAS_CalcDependencies.pbix

C# Wrapper for Power BI REST API


UPDATE 2017-05-18:
I released a new version of this project and also published it on GitHub: https://github.com/gbrueckl/PowerBI.API.Client
A blog post which refers to the updates can be found here.


Since the last major update last year, Power BI offers some APIs which can be used to interact with content and also data that is stored in Power BI. Microsoft provides a good set of samples on how to use the APIs on GitHub and also an interactive APIARY web UI which you can use to build and test API calls on the fly. However, it can still be quite cumbersome as you have to deal with all the REST API calls and the returned JSON on your own. So I decided to write a little C# wrapper where you simply pass in your Azure AD Application Client ID and can deal with all objects of the Power BI API as if they were regular C# objects.

Here is a little example on how to list all available reports and get the EmbedURL of a given tile using the PowerBIClient:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using pmOne.PowerBI;
using pmOne.PowerBI.PowerBIObjects;

namespace SampleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            PowerBIClient pbic = new PowerBIClient("ef4aed1a-9cab-4bb3-94ea-ffffffffffff");

            Console.WriteLine("Available Reports:");
            foreach(PBIReport pbir in pbic.Reports)
            {
                Console.WriteLine(pbir.Name);
            }

            Console.WriteLine();
            Console.WriteLine("Get EmbedURL for Tile [Retail Analysis Sample].[This Year's Sales]");
            Console.WriteLine(pbic.GetDashboardByName("Retail Analysis Sample").GetTileByName("This Year's Sales").EmbedURL);

            Console.WriteLine("Press <Enter> to exit ...");
            Console.ReadLine();
        }
    }
}

As you can see, it’s pretty simple and very easy to use, even for non-developers. You can find all the source code and the sample application for download below. The code as I have written it is very likely not the best code possible, but it works for my needs, is straightforward, simple and saves me a lot of work and time when dealing with the PowerBI API. Also, if the API changes, you may need to adapt the code accordingly. However, for the future I hope that Microsoft provides some metadata so that Visual Studio can build all this code automatically, e.g. using Swagger. But for the time being feel free to use, improve or extend my code!

SourceCode: PowerBIClient_Source.zip

Open Analysis Services Tabular Database Online

Those of you who have been working with SSAS Multidimensional in the past probably know that you can connect online to your SSAS database via Visual Studio / Data Tools.
Open_SSAS_DB_Visual_Studio

Any change you make (and save) online will be directly deployed to the server and is then visible to the end user immediately. This can be very convenient if you want to quickly check something or do some hot-fixes (e.g. changing the MDX script). But be aware that structural changes might require you to process the changed and dependent objects, so be sure about what you are changing online, especially if you are connecting to a productive environment!

I am quite sure that everyone who works with SSAS Tabular has also tried this feature for his Tabular database and ended up with the following error message:
“You are trying to connect to <servername> server running in tabular mode using the Tabular Model Designer. The option to open an Analysis Services Database is supported for servers running multidimensional mode only.”
Open_SSAS_DB_Visual_Studio_Tabular

So this simply does not work out of the box. However, there is a neat workaround which allows you to connect to your online SSAS Tabular database and make any changes you want. The idea behind it is to use the online database as our workspace database.

The first thing to do is to open Visual Studio / Data Tools and import the existing database into a new Project:
Visual_Studio_Import_SSAS_DB_Tabular

Then you need to select your workspace server. If there is no pop-up asking you for a workspace server, you have already configured a default one, which will be used in this case. As we are going to change this in the next step anyway, it does not really matter which workspace server you choose.

Now you are asked for the database which you want to import – choose the one that you want to connect to online. Once the import process is finished, Visual Studio already creates a workspace database for you and names it as follows: “<VS ProjectName>_<NT-Username>_<random GUID>” – in my case it was “TabularProject1_gbrueckl_b44f11de-21f4-4d18-bf67-0c25652fceba”. Any change you make in Visual Studio will be deployed directly to this database. Closing Visual Studio will unload the workspace database from memory by default.

Having all this information, you can probably imagine where this is leading. The workspace database name and server can be configured and are persisted in the user-specific “Model.bim_<user>.settings” file located in your project folder:

Model.bim_gbrueckl.settings
<?xml version="1.0" encoding="utf-8"?>
<ModelUserSettings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ServerName>localhost\TAB2014</ServerName>
  <DatabaseName>AdventureWorksDW2012_Online_gbrueckl_b44f11de-21f4-4d18-bf67-0c25652fceba</DatabaseName>
  <DbRetention>OnDisk</DbRetention>
  <SnapshotBackup>DoNotKeepBackup</SnapshotBackup>
  <Annotations />
  <IsRecalcRequired>false</IsRecalcRequired>
  <IsImpersonationModified>false</IsImpersonationModified>
  <CheckForImpersonationWarning>false</CheckForImpersonationWarning>
  <RequirePastedTablesUpgrade>false</RequirePastedTablesUpgrade>
  <TruncatedTables />
  <IsPowerPivotMetadataScriptExecuted>false</IsPowerPivotMetadataScriptExecuted>
  <IsASImport>false</IsASImport>
  <IsPowerPivotImport>false</IsPowerPivotImport>
  <SelectedCompatibilityLevel>300</SelectedCompatibilityLevel>
</ModelUserSettings>

The settings you need to change here should be obvious – <ServerName> and <DatabaseName>. Change them to match your online SSAS Tabular server and database – the one you previously imported the project from.
But be sure to also change <DbRetention>! Leaving the default “OnDisk” here would unload your database once you close Visual Studio, which is definitely not what you want! You need to set this to “InMemory” to keep the database in memory – remember, we want to connect online but keep the database accessible once we are done.

Before you make all these changes, you should close your Visual Studio solution completely to ensure nothing is cached internally. Then simply open the .settings file, apply the changes described above and re-open your solution. If you have done everything correctly, you should already see data for all of your tables:
Visual_Studio_SSAS_Tabular_with_Data
This is already the live-data that resides in your online database!

Congratulations! You are now connected online to your SSAS Tabular Database!

This approach can also be very useful if you are working with a backup of an SSAS Tabular database, as it allows you to make online changes to the restored database and see all the data that exists in it. Importing only the database project without this little hack would leave you with an empty database project, which is very hard to work with if you need to create new calculations. Further, this does not require you to reprocess the whole database, which might not even be possible if you have no connection to the underlying data sources!

But before you get too much excited about this, there are some more things to keep in mind:

  • This is not officially supported by Microsoft
  • This was just experimental but proved (for me) to be very handy in some scenarios
  • Opening an SSAS Tabular solution in Visual Studio sends an ALTER statement to the workspace database (which in this case is your productive database!) and updates the server with the metadata defined in your local .bim file. If your server database changes frequently, this is probably not what you want, as you would overwrite changes done by someone else recently. To work around this issue, you would need to re-create/import your SSAS project every time before making any online changes to make sure you are always re-deploying the current state of the database when opening your local SSAS project.
  • I am not responsible for any data loss, damage or whatsoever!

If you want to use this approach to deploy hot-fixes and this happens frequently, you may also consider using a more professional approach for this – for example the BISM Normalizer Visual Studio Add-In which allows you to select the changes that you want to deploy to a target server, similar to schema compare for SQL Server.