Karthik

Monitoring Analysis Services Models

Updated: Dec 5, 2021

Quick Intro to AS Architecture

As a solution architect, it is critically important how you decide which Analysis Services storage technology to use - and this depends on factors like latency, retrieval speed, the complexity of your measures and calculated columns, and finally how and where your data is stored. As you can see in the diagram above, there are three storage technologies: VertiPaq, DirectQuery over a relational source, and DirectQuery over AS. The main difference between VertiPaq and DirectQuery is that in VertiPaq the data is copied once and the engine never talks to the original source again.

An important take-away here is that in DirectQuery no data is stored; instead, MDX/DAX queries are sent to the remote data source in real time. This architectural decision is very important, and reversing it later may prove costly. Make it a point to avoid cross-island queries, and have clarity on your data latency, retrieval and storage architecture requirements. Remember that AS is just a semantic model sitting on top of your Data Lake or DW, serving your reporting needs. Also, based on your choice there will be one formula engine and potentially multiple storage engines (the typical case for composite models, where you have a mixed architecture). Check out more on DirectQuery over AS here
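As a quick sanity check on this decision, the TMSCHEMA_PARTITIONS DMV shows which mode each partition of a deployed model actually runs in. This is a sketch - the numeric Mode codes map to the Tabular Object Model ModeType enum (Import, DirectQuery, Dual, etc.), so verify the mapping on your own instance:

// query to check the storage mode of each partition (Import vs DirectQuery)
select [Name], [TableID], [Mode] from $system.tmschema_partitions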

Monitoring AS models - different options

Here is the list of tools and options I would recommend:

  1. Integration with Azure Log Analytics (recommended for prod)

  2. Diagnostic logs of AAS (recommended for prod)

  3. SSAS Activity Monitor (recommended for test/dev)

  4. Trace events, DMVs, Profiler and Performance Monitor (recommended for test/dev)

I have tried the last two options and both met my requirements. SSAS Activity Monitor is an open-source utility to monitor the sessions, transactions and connections on an AS instance. It works for both Tabular and Multidimensional models. It has an excellent UI and is easy to use; you can download and try it here

Analysis Services Trace Events - You can follow the activity of an instance by capturing and then analyzing the trace events it generates. Trace events can be started and captured using SQL Server Profiler, or started from an XMLA command as SQL Server Extended Events. I tend to use DMVs as I am not that good at XMLA scripting (however, you can try the in-built XMLA templates in SSMS: connect to the AS model, open the Template Browser with Ctrl+Alt+T, and check the built-in templates). More info on XMLA scripts can be found here. Some frequently used DMV queries are as follows:

// this query gives all active sessions, users and duration
select * from $system.discover_sessions order by session_last_command_start_time desc
// query to retrieve memory usage of every object in the model
select * from $system.discover_object_memory_usage
// query to get all columns
select * from $system.tmschema_columns
// query to get all calculated columns (Type = 2 means calculated)
select * from $system.tmschema_columns where [type] = 2
// query to get all measures
select * from $system.tmschema_measures
// query to get all dependencies
select * from $system.discover_calc_dependency
// get unique row counts for all tables and columns
select * from $system.discover_storage_tables order by rows_count desc
// queries to get all the roles, associated permissions and role memberships defined in the model
select * from $system.tmschema_roles
select * from $system.tmschema_table_permissions
select * from $system.tmschema_role_memberships
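On the trace side, DMVs can also show you what is being captured on the instance right now. A quick sketch - the exact column sets vary by AS version:

// list the traces currently running on the instance
select * from $system.discover_traces
// list open connections to the instance
select * from $system.discover_connections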

Performance Monitor - Another important tool (free with Windows) with which you can monitor remote and local instances of Analysis Services or SQL Server.

To see the description of any counter that can be used with SQL Server Analysis Services, open the Add Counters dialog box in Performance Monitor, select a performance object, and then click Show Description. Below are the important counters I use:
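As a hedged starting point (the performance object prefix depends on your version and instance, e.g. MSAS15 or MSOLAP$InstanceName), some commonly monitored Analysis Services counters are:

// Memory\Memory Usage KB - total memory used by the AS process
// Connection\Current connections - number of open client connections
// Threads\Query pool busy threads - indicator of query concurrency pressure
// Processing\Rows read/sec - throughput while processing partitions
// Storage Engine Query\Queries from cache direct / sec - cache hit activity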

Concluding Remarks

* Research, experiment and build a POC before deciding on the Analysis Services storage technology

* Use the right monitoring tool - one that is lightweight yet gives you the right stats in real time. Again, use these in sandbox and test environments before promoting them.

* Another interesting way to build your own monitoring utility is to utilize the Job Graph events; check out this excellent article by Chris Webb

Love to hear your comments...Happy blogging!!
