How Much RAM Do I Need For My SSAS Tabular Server?

Much of my client work these days is focused on performance… making slow Analysis Services servers run faster. The good part about these types of engagements is that there’s usually (not always) a good amount of low-hanging fruit, and I get to play the hero who swoops in to save the day. The bad part about these types of engagements is that many of the performance issues could have been avoided (or at least mitigated) had the server been given enough memory from the start.

When you’re paying $100k+ in licensing costs for Enterprise Edition, the last thing you want to do is skimp on the amount of memory… a classic case of penny wise, pound foolish.

So, just how much memory is needed for your SSAS tabular server?

Short Answer: as much as you can afford. Relative to the cost of licensing, memory is cheap. Relative to the cost of having an SSAS pro tune the system/workload, memory is cheap. When in doubt, buy more than you think you’ll need and move on to more important problems to solve.

Long Answer: it depends! Not everyone has the luxury of buying new hardware each time they need to stand up a new SSAS solution… and not every SSAS solution demands new-shiny top-of-the-line hardware. Oftentimes a company will have perfectly suitable hardware available for reuse – in which case the question changes ever so slightly to: “do I have enough memory? If not, how much more should I buy for this server?”

With an “in-memory” technology like Analysis Services Tabular, the answer may seem obvious: determine the size of your model and buy that much memory. Unfortunately, it’s not quite that simple.

Raw Data Size

If you’re building the Tabular model on top of an existing source database (ideally a dimensional model), then this is the size of that relational database…or at least the tables that you plan to bring into the model. If you’re unsure how to determine the size of your SQL Server database, start here: StackOverflow – Get Size of All Tables In Database

Compression Ratio

In addition to being an “in-memory” technology, Analysis Services Tabular is also a “column-store” technology which means all the values in a table for a single column are stored together. As a result – and this is especially true for dimensional models – we are able to achieve very high compression ratios. On average, you can expect to see compression ratios anywhere from 10x-100x depending on the model, data types, and value distribution.

What this ultimately means is that your 2 TB data mart will likely only require somewhere between 20 GB (low-end) and 200 GB (high-end) of memory. That’s pretty amazing – but it still leaves us with a fairly wide margin of uncertainty. To further reduce that uncertainty, take a representative sample from your source database, load it into a model on a server (in your DEV environment), and calculate the compression factor.
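To make the extrapolation concrete, here’s a minimal sketch of the math (the sample and source sizes below are made-up numbers for illustration – measure your own):

```python
# Estimate at-rest model memory by extrapolating from a representative sample.
# All figures here are hypothetical examples, not measurements from a real system.

def estimate_model_size_gb(sample_source_gb, sample_model_gb, full_source_gb):
    """Extrapolate full model size from a sample's measured compression ratio."""
    compression_ratio = sample_source_gb / sample_model_gb
    return full_source_gb / compression_ratio

# e.g. a 50 GB sample that compresses to 2 GB in the tabular model (25x)
# suggests a 2 TB (2048 GB) source needs roughly 82 GB at rest
print(round(estimate_model_size_gb(50, 2, 2048), 1))  # 81.9
```

This only estimates the model at rest – the growth and workload factors below still need to be layered on top.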

Note: make sure the sample is representative and don’t just test the fact tables or tables with the most rows – in many cases, dimension tables (with lots of string-based columns) can have a significant impact on model size.

Growth Factor

This one is pretty straightforward – but often overlooked. How much is your data set going to grow over time? Is there a retention period, or are you keeping all of history?

If the retention policy is fixed (e.g. 5 years) and the growth over time is flat, the overall size remains constant. However, if data is growing at a rate of 15% year over year, then calculating the growth factor depends on the life-span of the solution.
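A quick back-of-the-envelope sketch of the compound growth math (the 15% rate and 5-year life-span are illustrative assumptions, not recommendations):

```python
# Compound growth multiplier over the solution's expected life-span.
# Rate and years are hypothetical inputs for the sake of the example.

def growth_factor(annual_growth_rate, years):
    """Multiplier to apply to today's model size for future growth."""
    return (1 + annual_growth_rate) ** years

# 15% year-over-year growth over a 5-year life-span roughly doubles the data
print(round(growth_factor(0.15, 5), 2))  # 2.01
```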

Note: Things can get a bit more complex if you have additional scaling factors such as those found in multi-tenant solutions. 

Workload Factor

This is the amount of additional memory (above and beyond that used to store the fully-loaded data model at rest) needed to accommodate your processing strategy and/or peak query workload.

Let’s start with processing strategy:

ProcessFull: When using the ProcessFull command to process a tabular model, a shadow copy of the model is built in memory, right next to the existing model, on the same server. It isn’t until one of the very last steps of the process that the old model is dropped from memory and the new model becomes available for queries. This means we need enough memory to hold 2 copies of the same model at the very peak of processing.

ProcessClear+ProcessFull: This is the same as the previous option, only we clear out the existing model from memory before processing so our memory requirement is reduced to 1x because even at the peak we still only have a single copy of the model in memory.

The benefit of “ProcessFull” is that the (existing) model remains available for queries while the shadow copy is being processed. However, if you’re processing after business hours (common in the majority of cases) and don’t need to have the model available for queries, then using a “ProcessClear + ProcessFull” can greatly reduce your memory requirements.

Note: There are several other flavors of processing strategies (e.g. Incremental) requiring less memory than the 2x needed by ProcessFull… but they’re a bit more complicated to implement and usually require a bit more work upfront in the data model and ETL. If your model is small and a ProcessFull is completing well within your time window, it’s best not to bother. Incremental processing is really only for short processing windows and/or large models.
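The memory math for the two strategies above can be sketched like this (the 80 GB model size is a hypothetical example; the multipliers reflect the peak-copy behavior described above):

```python
# Peak processing memory for the two strategies discussed above.
# Model size is a hypothetical example.

def processing_peak_gb(model_gb, strategy):
    """Estimated peak memory during processing, by strategy."""
    multipliers = {
        "ProcessFull": 2.0,               # old model + shadow copy coexist at peak
        "ProcessClear+ProcessFull": 1.0,  # model cleared first; only one copy ever in memory
    }
    return model_gb * multipliers[strategy]

print(processing_peak_gb(80, "ProcessFull"))               # 160.0
print(processing_peak_gb(80, "ProcessClear+ProcessFull"))  # 80.0
```

The trade-off, as noted above: the 1x strategy means the model is unavailable for queries during processing.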

The requirement for peak query workload depends heavily on writing efficient DAX (which often depends on having the right data model and experienced DAX developers). Poorly written DAX can result in substantial “memory materialization” – which is when the engine needs to build an intermediate result set before applying some final calculation logic in order to resolve a DAX query. This intermediate result set is NOT compressed… it happens outside of the storage engine… so if it contains a lot of rows/columns, it can consume a substantial amount of memory.

Note: For more information on memory materialization, see Chapter 13 of The Definitive Guide To DAX.
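To get a feel for why uncompressed materialization hurts, here’s a hypothetical back-of-the-envelope calculation (the row count, column count, and bytes-per-value are assumptions for the sake of the example – actual spool sizes depend on the query plan):

```python
# Rough size of an uncompressed intermediate result set ("materialization").
# All inputs are illustrative assumptions.

def materialization_gb(rows, columns, bytes_per_value=8):
    """Approximate memory for an uncompressed intermediate result set."""
    return rows * columns * bytes_per_value / 1024**3

# a 500-million-row intermediate result with 10 columns at ~8 bytes per value
# needs on the order of 37 GB of memory, uncompressed
print(round(materialization_gb(500_000_000, 10), 1))  # 37.3
```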

Here’s an example from earlier this year…

[Chart: Performance Monitor capture of server memory during a single DAX query]

What you’re looking at is a single DAX query that was run against an 8 GB tabular model (no other activity on the server at the time)… see that black line? That’s the query eating all the available memory on the server. Once this query consumes ~50 GB of memory and hits the hard memory limit (red line @ 90 GB), it fails with an out-of-memory exception.

This is an extreme example to be sure – and I’m not saying you need to prepare for that scenario when purchasing hardware… because that’s not a hardware issue (it’s a DAX tuning issue). What I am saying is that you need to be aware that “materialization” happens in many queries/calculations and most of the time it’s not a big deal. But from a memory requirement standpoint, even a small amount of materialization can become non-negligible across a large user base.

Final Thoughts

  • never buy a new server with less than 128 GB of memory. Period. Doing so would be the equivalent of purchasing a $1k+ suit and matching it with a pair of crocs.
  • when re-purposing an existing server – don’t forget you have the option to upgrade/add more memory
  • tabular compression is awesome but hard to estimate… best to take a measurement and extrapolate
  • processing (depending on the approach) can require a substantial amount of memory
  • query materialization can become a problem if you don’t have the right model and/or write inefficient DAX
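Pulling the factors above together, a ballpark sizing calculation might look like the sketch below. Every input is hypothetical – measure your own compression ratio and pick a workload multiplier that matches your processing strategy rather than trusting these numbers:

```python
# Combine the factors discussed in this post into one ballpark figure.
# All inputs are hypothetical examples.

def memory_requirement_gb(raw_source_gb, compression_ratio,
                          annual_growth_rate, years, workload_multiplier):
    """Ballpark memory requirement: at-rest size, grown over time, with workload headroom."""
    at_rest = raw_source_gb / compression_ratio       # compression factor
    grown = at_rest * (1 + annual_growth_rate) ** years  # growth factor
    return grown * workload_multiplier                # workload factor (e.g. 2x for ProcessFull)

# 2 TB source, 25x measured compression, 15% YoY growth over 5 years,
# 2x headroom for a ProcessFull strategy
print(round(memory_requirement_gb(2048, 25, 0.15, 5, 2.0)))  # 330
```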


I help make slow SSAS servers run faster. Need immediate pain relief? Let’s have a conversation.

16 replies on “How Much RAM Do I Need For My SSAS Tabular Server?”

If the raw data is already in columnstore tables, should we assume we won’t get much additional compression from the SSAS model?


Fast forward two years to 2018 and this article still kicks ass. I want to thank you for writing it. My company is just starting to experiment with SSAS Tabular and Power BI. No one here has experience with Tabular and my system admin team has been tasked with making sure performance is never an issue. Just a week ago we had a long debate on how to measure RAM and which type of cloud instance (hardware) we should be going with. This post helps to clear the air a bit. Looking forward to browsing the rest of your site and learning more! I’m also thinking we need to make the Definitive Guide to DAX required reading for our BI team. 😉


Hi John – thanks for the kind words.

The Definitive Guide to DAX is an outstanding book. I’ve read it cover-to-cover several times now and still learn something new each time.

Best of luck to you and your team!



Hi Bill,

When a DAX query is run, the tabular cube is brought into memory.

When different queries are run by different users, does it bring in multiple copies of cube to memory?



Hi Rekha,

No. The tabular model always resides in memory (with very few exceptions).

The additional memory that can be required by a query workload – that I’m referring to in the post above – is primarily attributed to certain scenarios where in order to resolve a query, SSAS will need to make a temporary copy of the data. The temporary copy is only a small subset of the data in the model, but (and this is the major concern) is uncompressed and can therefore consume a very large amount of memory.


Hi Bill,
Great post, thank you for the information.

Could you please help me on the below item:
Hardware configuration for an SSAS tabular model server to process 600 Million rows fact tables.

If possible, kindly let me know the number of processors (CPUs) and amount of RAM that would be optimal for my requirement.



Hi Rajeesh,

We can’t determine the amount of memory just based on the number of rows in the fact tables. With tabular models, the main driver of model size is the number of unique values. So for example, a 1 column fact table with 600mm rows but only 100k unique values will be very small compared to a 1 column fact table with 600mm rows and 200mm unique values.

My advice would be to test and extrapolate. Create a tabular model and load only 10mm rows and see how much memory it consumes. Do it again with 50mm rows and then extrapolate out to 600mm. Don’t forget to include additional memory for workload (i.e. processing & queries) and growth over time.

For CPU, that’s a bit more difficult to estimate. What is the expected concurrency? What is the SLA for query response time? What is the processing pattern (full, incremental, other) and frequency (daily, hourly, other)? What is your budget (CPU drives licensing cost)?

