My last post (oh, it was too long ago), Dynamic Security Part 2, explained how to implement dynamic security in a dimensional model using MDX. I only touched the surface of this topic, and there are other, much more capable folks you can follow up with to find out more. Keep in mind there is also an excellent PASS Analytics Conference on the horizon. If you can’t find your answers there, then there are no answers to be found.
As a reminder, the overall strategy was to build one security model that handles all models: dimensional, relational, and tabular. For this post we’ll talk about the tabular model. What you’ll find is that setting up tabular security is not that much different from setting up security in the dimensional model. Let’s take a look.
Setting up Roles in a Tabular Model
Roles in the tabular world are similar to those in SSAS except that tabular uses the DAX language instead of MDX. Roles are defined within the SSDT tool, and DAX filters are defined, much like in SSAS, at the dimension level. All measures must be defined beforehand in the tabular project. Once the role is created and defined, you deploy it to the tabular instance for the role to take effect. You do have the option to define roles in SSMS, but those roles would get overwritten the next time the project is deployed.
We are still interested in creating two roles. One role will be specific to the CustomData value used by Performance Point, and the other will use EffectiveUserName exclusively and serve every other tool. The first step is to create a role in your BISM model. To do this, click the role icon in the toolbar. Once you open the role tab you should see a role called Administrator. You will need to create two new roles. As in previous examples, we’ve created a role called CustomData and a role called EffectiveUserName. Both roles only need Read permission.
After you create the role you now need to define the filter for the dimension to which you want to apply the security. Starting with the CustomData role we will add the following DAX filter to the DIM_CLIENT dataset:
=DIM_CLIENT[CLIENT_DIM_KEY]=LOOKUPVALUE(FACT_USER_CLIENT[CLIENT_DIM_KEY],DIM_USER[USER_NAME], CUSTOMDATA(), FACT_USER_CLIENT[CLIENT_DIM_KEY], DIM_CLIENT[CLIENT_DIM_KEY])
Luckily, both MDX and DAX use the same CUSTOMDATA() function to read the CustomData value. You will also notice that the filter is applied to the DIM_CLIENT dimension in the model. This is defined in SSDT and then deployed up to the server.
Next we want to apply the filter for the EffectiveUserName role. The process is exactly the same only we use the USERNAME() function in the DAX filter:
=DIM_CLIENT[CLIENT_DIM_KEY]=LOOKUPVALUE(FACT_USER_CLIENT[CLIENT_DIM_KEY],DIM_USER[USER_NAME], USERNAME(), FACT_USER_CLIENT[CLIENT_DIM_KEY], DIM_CLIENT[CLIENT_DIM_KEY])
Remember that no members are needed for the CustomData role because the role is specified in the Performance Point connection. You will need to add the account specified in the data connection to the EffectiveUserName role in order to capture that account as it enters the system. You will also want to add individual user accounts to the role because that captures any users connecting through an Excel client.
Configuring the Data Source for Power View
Power View uses the EffectiveUserName approach, which means any user of Power View will need to be a member of a role in the tabular model. This also means a connection account must be defined in the connection string, and this account needs to be given administrator rights on the tabular instance.
Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Data Source=server07\sql2012astabular;Initial Catalog=TestTabular
The tabular instance is server07\sql2012astabular and the tabular cube is called TestTabular. The access account is listed in the connection string properties as a stored credential. You will also need to check both the Use as Windows Connection and the Set execution context to this account options. Unfortunately, the account listed in the connection string must be an administrator on the tabular instance. Still, the password is hidden in the connection and, as an administrator, you can create the data connections ahead of time.
Though not necessarily the simplest thing to do, there is a means to have a dynamic security model that serves all BI models and tools. If you want to dig even deeper and stretch this model beyond its current limitations, I suggest reading the paper Analysis and Reporting Services, Design and Implementation: A SQL Server 2008 Technical Case Study. One of the current limitations is that the security is only updated when the cube or tabular model is processed. This can be a significant restriction, especially if changes are frequent. I would also recommend developing a means to manage the security tables, such as a custom administrative GUI of some nature. Still, for quick and dirty dynamic security this will do the job.
In my previous blog post I walked through how to create a simple security model. Granted, the model is simplistic, but with a little ingenuity you should be able to model out some fairly complex hierarchies.
The next step is to create the roles. Remember that any access to a BISM or OLAP system is through role security. The trick is to have roles capture the correct login account and then leverage the security model to filter the data. I’ll create two roles. For illustrative purposes I’ll call one CustomData and the other EffectiveUserName. CustomData and EffectiveUserName are key concepts for dynamic security. Both are metadata in the SharePoint data connection for BISM, relational and OLAP.
For example, when you connect to Performance Point using an unattended service account, the connection you would see in a trace is the service account. Though the connection logs in as the service account, there is additional connection detail showing the <CustomData> value as the account logged into SharePoint. So if UserA logs into SharePoint and runs PerformancePoint, a connection will be made with ServiceAccountA but the CustomData value will be <CustomData>UserA</CustomData>. CustomData is simply a string value. If the user is a domain user, then the CustomData value will contain domain name\username; otherwise, the value will only contain a name. As of SharePoint 2010, only Performance Point uses the CustomData value. Everything else uses EffectiveUserName.
Like CustomData, EffectiveUserName is metadata contained in the connection. The difference is whereas CustomData is a string value, EffectiveUserName is an Active Directory Account. It will always have a domain\username value.
The reason we need two roles is that our filters must use one function to access CustomData and another function to access EffectiveUserName, and we can’t have two filters on the same attribute in the same role. This restriction will change in SharePoint 2013, when everything will use EffectiveUserName. We can prevent PPS from using the EffectiveUserName role because we specify the role to use in the data connection. The CustomData role will have the individual user accounts as members as well as the service account. Why do you need the individual users included if everything is connecting via the service account? You need the individual users in case anyone is accessing the instance through an Excel client.
Now let’s get to the good stuff. Each role in SSAS and BISM will have a filter added to the dimension you are trying to restrict access to. SSAS will have an MDX filter while BISM will have a DAX filter. Luckily they both use the same functions to access either CustomData or EffectiveUserName. For my example using MDX for CustomData I used the following filter:
exists([Client].[Client Name].members, strToMember("[User].[User Name].&[" + CUSTOMDATA() + "]"), "Fact User Client")
I don’t claim to know much about MDX and, in this case, I definitely stood on the shoulders of those who came before. The two things to notice are the CUSTOMDATA() function and the [User].[User Name] syntax. The first is the function that reads the connection string, while the second is the hierarchy representing the user name in the DIM_USER table. This filter will grab the value of CustomData from the connection string, search the DIM_USER table for that value, look up the value in the FACT_USER_CLIENT bridge table, and then bring back only the clients associated with that user. If the user is not in the table then the user sees nothing. Keep in mind this is the filter used only for SSAS accessed via PPS; everything else will use the EffectiveUserName role.
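If it helps to see what the filter is doing in relational terms, here is a rough T-SQL sketch of the equivalent lookup against the security tables from the earlier post (DIM_USER, FACT_USER_CLIENT, DIM_CLIENT). The @UserName variable simply stands in for whatever CUSTOMDATA() or USERNAME() returns, and the USER_KEY and CLIENT_NAME column names are assumptions where the posts don’t spell them out.

-- Rough relational equivalent of the role filter: return only the clients mapped to
-- the connecting user in the bridge table. No rows back means the user sees nothing.
DECLARE @UserName NVARCHAR(255) = N'DOMAIN\UserA';  -- stand-in for CUSTOMDATA() or USERNAME()

SELECT c.CLIENT_DIM_KEY, c.CLIENT_NAME               -- CLIENT_NAME column name is assumed
FROM dbo.DIM_CLIENT AS c
JOIN dbo.FACT_USER_CLIENT AS b ON b.CLIENT_DIM_KEY = c.CLIENT_DIM_KEY
JOIN dbo.DIM_USER AS u ON u.USER_KEY = b.USER_KEY
WHERE u.USER_NAME = @UserName;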
The MDX for the EffectiveUserName will be eerily similar. Here is the syntax:
exists([Client].[Client Name].members, strToMember("[User].[User Name].&[" + USERNAME() + "]"), "Fact User Client")
The astute reader will notice that the only difference is the USERNAME() function. This function is what grabs the EffectiveUserName value from the connection metadata.
Everything else is the same, which makes our job much easier. Both filters are placed in the Dimension Data tab. For additional information concerning SSAS role security see the following link.
So now you have your SSAS roles created and your MDX filter included in each role. As long as you have an Active Directory user logging into SharePoint, and that user account is added to the DIM_USER dimension and the bridge table in domainname\username format, the data they see will be filtered. This will work for any tool accessing the multi-dimensional model. If you are not using PPS, or if you are using PPS in SharePoint 2013, then you do not need the CustomData role.
In the next part I will discuss how to implement the same security model in the BISM model. This will allow you to use the security tables to filter Power View and PowerPivot using DAX filters.
I recently installed SQL Server 2012 with SSIS onto my Windows 8 laptop. When trying to connect with an administrative account using SSMS I received the following Access Denied error:
This threw me because I had full admin rights and had never seen this error before. I could successfully connect to the database engine, SSAS, and SSRS. There were some postings about changing rights in the COM objects for MSDTC, but this didn’t work for me. Digging deeper I found this in the Event Log:
This led me to think there was some validity to the COM solution. Within Component Services (you can find Component Services in the Administrative Tools section) you should see a component called Microsoft SQL Server Integration Services 11.0.
Right click on the service and select Properties. Click on the Security tab and you should see the following options:
Click the Edit button on the Launch and Activation Permission section. Add your account, or whichever account is accessing SSIS, and make sure it has both the Local Launch and Local Activation selected.
That’s it. You should be able to connect.
Almost every BI project I’ve worked on encounters the same roadblock. The customer demands per user row-level security but Kerberos is a dysfunctional beast that no one can get working. Recently I was given the unique opportunity of addressing this security question head on. The project wasn’t just concerned with accessing SSAS through SSRS or Excel but instead, the project required that all tools (Power Pivot, Power View, SSRS, Excel clients, and Performance Point) be able to connect to their respective relational, multi-dimensional, and\or tabular model via SharePoint. Every tool had to provide row-level security based on a user login without using Kerberos. This meant that even data connections using an unattended service account still had to provide row-level security based off the user account connected to SharePoint.
We all know that SSAS and tabular models use role-based security. Normal procedure is to add the Active Directory user account or AD group to the role and then filter dimension attributes in the role properties. That’s all fine and dandy if the account logging into the instance is the account that is in the role or being impersonated by another account with that ability. But what if all access to the cube is with a service account without Kerberos? This means that all connections are made using a single account. How do you filter data per account when all connections are using the same account? This blog will walk you through how to do it. The information in this blog is a compilation of a few very good blogs which I will link to, but this is also an attempt to consolidate the solution into a single blog which will cover all the scenarios.
It’s too much information for one entry so I’ll break it up into multiple blog postings. The ultimate purpose is to show that you don’t need Kerberos in order to implement dynamic security in your BI\SharePoint projects. I’ll show the basic framework of the solution. How you expand it or what functionality you ultimately add is up to you.
First things first. We need to create the security model. This part is well-documented. A primary source is a wonderful overview by Nick Barclay. He also provides links to other blogs which are very helpful in getting started. In addition to Nick, there is a blog by Devin Knight which also illustrates these points.
At this stage we’re just going to create some tables. I’ll discuss later how these tables will be used. Just know for now that you have to have a means to capture the user name and associate the user name with what the user is allowed to see. Since we’re database folks it seems only fitting that we use a database to capture this relationship. In my model I will filter what the user sees based off a company name.
You only need to create two tables. The first table I’ll call DIM_USER. This table contains all the user names along with a unique surrogate key and any other user attribute information you want. I’ll keep it simple and just have two columns: user_key and user_name. The next table is actually a factless fact table. You can also refer to it as a bridge table. This table will be the intersection between the DIM_USER table and any other dimension tables you want to filter. In this example we will filter on a table called DIM_CLIENT, though you could just as easily filter on a products table or an employees table. The bridge table contains only two columns: the unique user key from the DIM_USER table and the unique client key from the DIM_CLIENT table. I’ll call the bridge table FACT_USER_CLIENT. You may want to call it something more generic if it will contain multiple dimension filter keys.
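To make the structure concrete, here is a minimal T-SQL sketch of the two tables. The CLIENT_DIM_KEY and USER_NAME names match the DAX and MDX filters shown in the other posts; the data types, the dbo schema, and the exact key column names are assumptions you should adjust to your own warehouse standards.

-- Minimal sketch of the security tables (data types and key names are assumptions).
CREATE TABLE dbo.DIM_USER (
    USER_KEY    INT IDENTITY(1,1) PRIMARY KEY,
    USER_NAME   NVARCHAR(255) NOT NULL     -- stored as domain\username
);

CREATE TABLE dbo.FACT_USER_CLIENT (
    USER_KEY        INT NOT NULL REFERENCES dbo.DIM_USER (USER_KEY),
    CLIENT_DIM_KEY  INT NOT NULL,           -- surrogate key of your existing DIM_CLIENT table
    CONSTRAINT PK_FACT_USER_CLIENT PRIMARY KEY (USER_KEY, CLIENT_DIM_KEY)
);

-- Example rows: UserA is allowed to see clients 10 and 11, UserB only client 12.
INSERT INTO dbo.DIM_USER (USER_NAME) VALUES (N'DOMAIN\UserA'), (N'DOMAIN\UserB');
INSERT INTO dbo.FACT_USER_CLIENT (USER_KEY, CLIENT_DIM_KEY) VALUES (1, 10), (1, 11), (2, 12);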
Finally, you will want to create these tables in your SQL Server data warehouse. This would be the same data warehouse which you use for your SSAS DSV and your tabular model. Eventually you will process these tables along with your cube or tabular instance. There is a more complicated, yet robust, method which stores these tables in the relational model which I’ll show later, but we’re getting ahead of ourselves.
The above model, though simple, will be the basis for all of your filtering. Keep in mind that each user accessing your BI environment will need to have an entry in the DIM_USER table. In addition, the accounts must be Active Directory accounts. Luckily, you only need to make these changes in one place and the filter will apply to all the tools. In the next blog post I’ll explain how to setup the roles in SSAS and BISM which will query the tables and apply the filtering through DAX and MDX.
My experience at PASS Summit 2012 was unique this year. I was both a first time attendee and first time speaker. I give a lot of presentations so it wasn’t like I had never spoken before or wasn’t known. In fact, I submitted sessions to PASS last year and was rightfully turned down because I lacked speaking experience. I spent the following year speaking even more at various events so I was delighted (and surprised) when they chose me.
My sessions were scheduled back-to-back at the very end of the conference. No big deal and, actually, this turned out well because I got horribly sick with stomach flu when I arrived in Seattle. I spent the first day of the conference, missing a precon, in my hotel room and I almost went to the hospital. I was seriously considering changing my flight and heading home but I started to feel a little better as the week went on. Still, I never fully recovered during the trip and I’m just now starting to get over the illness. Luckily my presentations were on the last day so I had plenty of time to recover.
Before my presentations I felt good, though I was extremely tired. I had been on the road for almost two weeks and I was drained. The thrill of speaking at the Summit kept me going (as well as an energy drink shot from @vdamu) and I was excited to have the opportunity to speak. My first presentation, Licensing SQL Server 2012, was crowded for being at the end of the conference. The session was going well until about the middle, when I drew a blank on a key slide. This can happen, but it was pretty bad this time. I like to be more professional than that. What really shook me was someone in the audience saying the worst thing a presenter could ever hear: “Now I’m more confused than when I arrived.” It was at that point I thought I would never present again. To me it was an absolute failure and I sincerely apologize to everyone in the room. Here I was presenting at my first PASS Summit and I completely bombed my presentation. I have no excuses.
Luckily I couldn’t think about it too much because I had another presentation right afterwards. This one went much better (I think). It was SQL Server and Healthcare, and again it was a big audience with lots of interest. Almost everyone in the room was working for a healthcare company. I was impressed with the crowd and had a great time.
So, in summary, here are some of my thoughts:
- The presenter desktop did not have dual monitor capabilities. This killed me. I had completely relied on Presenter View for my notes and not having this was the equivalent of pulling away a teleprompter at the last second.
- ZoomIt was not installed on the desktop. It was supposed to be, but I looked and looked. No ZoomIt.
- The Summit is a much larger event than I had realized. The event is HUGE and the organizers put on a great show.
- Despite my failure at the first session, I will keep speaking but I am throwing away all my previous presentations. I will be doing completely new material for 2013.
- The belief that presenters submit presentations in September and can’t or shouldn’t change them needs to be rethought. Many presenters were changing their presentations right up until they presented. That’s ok.
- If you go to a PASS Summit try as hard as you can not to get the stomach flu. Trust me – it’s no fun.
I keep reminding myself that being a speaker puts you front and center for either success or failure. People don’t speak because they fear failing or embarrassing themselves. I both failed and embarrassed myself. I guess the only thing I’m surprised about is that it didn’t happen sooner. Despite botching a session I will continue to speak for as long as someone lets me and for as long as people continue to attend. Thanks to the selection committee for allowing me to speak. It was a great thrill to represent the Saint Louis SQL community the best I could and I look forward to participating both as a speaker and attendee in many events to come.
I love technology. I love food. I love beer. So, I asked myself while sitting at the computer drinking and eating, why not start an event and see if anyone else likes the same stuff? What was created from this moment of clarity is BIBrews. Here is the official description:
BIBrews is a unique opportunity for like-minded technology experts to informally gather for drinks and food and discuss the pressing questions around Business Intelligence and SQL technical trends and experiences. The content is unrehearsed and open. BIBrews has a strict “no sales” and “no recruiting” policy.
The date is December 13th. I can’t provide endless beer and food so we’re limited to 50 tickets. It would be fantastic if you can make it. If you can’t that’s fine too. I would ask your help in spreading the word. I’m excited about getting this started and I’m looking forward to some good discussions around food and drink. Here is the EventBrite link to register.
There is a lot of fear right now around the new licensing for SQL Server 2012. Change is always difficult, but I believe much of the anxiety can be alleviated if you try to understand why things are the way they are. Let’s focus on the primary change in SQL Server 2012 licensing: moving from processor-based licensing to core-based licensing.
Core licensing is the biggest gotcha for most companies. But let’s look closely at the facts. All of Microsoft’s competitors (Oracle most prominently) were already on core-based licensing programs. Does this make it right for Microsoft to make such an unpopular change? Probably not, but as a for-profit company it would have been foolish of them to maintain the same licensing model. Two primary factors forced Microsoft’s hand. First, processors are getting much denser; I’ve seen processors with 10 cores and more. To charge per processor for a processor with 10 cores would have left a lot of money on the table. I’m not a huge fan of profit for profit’s sake, but Microsoft had to change. The other factor is the increase in virtualization. Cores can be divided out as processors to virtual machines, so again it makes sense to charge on a per-core basis because in a virtual environment a core equals a processor. Finally, let’s look at the basic fact of the matter: SQL Server cost per core is still significantly less expensive than Oracle. Even more to the point, if you are only using 4-core or 6-core processors running Datacenter edition, then you are paying the same price for SQL Server 2012 as you were for 2008. Let’s not forget you are also getting SSIS, SSRS, and SSAS tossed in for good measure.
What I think a lot of customers are fighting right now is the transition from processor licensing to core licensing while, at the same time, juggling licensing between physical servers and virtual servers. Without a doubt this can be murky territory, since we are now talking about things like licensing physical cores vs. virtual cores, as well as license mobility. Here is my summary take on the matter, and I think it is a good baseline to start from:
1. If you plan to consolidate all your SQL Servers onto a single host cluster, then look at licensing the physical hosts (especially if the vCPUs will outnumber the pCPUs)
2. If you spread small SQL Servers (<=2 cores per virtual machine) across your datacenter, then license the vCPUs
3. If most of your SQL Servers are virtual, then buy Software Assurance. Microsoft wants to push you into an EA agreement with SA, and this is the only way to get license mobility
Don’t get me wrong. I know I’m simplifying. For example, I’m not bringing up things like CAL-based licensing, SPLA licensing, the cloud, or even what to do with development servers or how to track licensing. My suggestion is to always get an outside professional to help with your licensing needs. You can consult with Microsoft and, in my experience (believe it or not), they are willing to try to save you money. But don’t rely solely on them. Get a third party who can spend some time in your organization and really document what’s going on. Trust me: whatever you spend up front to get it right will greatly outweigh the cost of getting it wrong.