Multiple Active Result Sets (MARS) causing locks on SQL Server

It has been a while since my last blog post; I've been a little preoccupied by my personal situation.

But, the good news is that work continues as usual and it remains interesting enough to share experiences from.

This blog post is about the performance of Business Central on SQL Server. The subject seems to keep finding its way back to me; I tried for a while to shake it off, but eventually decided to give up and let the interesting projects happen. This is certainly one of them.

The problem that I am going to describe in this blog can happen to anyone, and it probably does, but it only becomes noticeable, and occasionally causes real issues, in high-volume scenarios with specifically designed transactions.

What is MARS?

As the title of this blog says, MARS stands for Multiple Active Result Sets, and it is a native feature of SQL Server.

During a transaction, Microsoft Dynamics 365 Business Central needs a lot of data, and this data has to be kept in SQL Server's memory until it is no longer needed. Before MARS, this was done using SQL cursors.

You can find more documentation on MARS for Business Central on the Microsoft Docs pages.

MARS is great, but it is also expensive when combined with database integrity and long-running transactions, which is what this blog is about.

SPOILER ALERT: Yes, the problem goes away when transactions are not long-running.

The Problem

When you prepare a dataset in Business Central using AL, SQL Server creates a result set and keeps it in memory while you work with it. The data is sent from SQL Server to the service tier in batches, and when the transaction completes, the dataset is disposed.

Keeping these datasets prepared is not cheap, and the recommendation is to keep transactions short so SQL Server does not have to keep many datasets active at the same time.

If the duration of a transaction increases, the problem can be analysed by looking at the SQL Server wait statistics. These will show one or more ASYNC_NETWORK_IO waits, together with the query that SQL Server is keeping active.
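As a minimal sketch, the dynamic management views can surface these waits. The query below lists active requests with their current wait type and statement text; all columns and views used here are standard SQL Server DMVs:

```sql
-- Sketch: show active requests, their current wait type
-- (e.g. ASYNC_NETWORK_IO), and the statement SQL Server keeps active.
SELECT r.session_id,
       r.wait_type,            -- ASYNC_NETWORK_IO for MARS datasets being drained
       r.wait_time,            -- milliseconds spent in the current wait
       r.blocking_session_id,  -- non-zero once lock chains appear
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- exclude this diagnostic query itself
```

Run it while the long transaction is in flight; the ASYNC_NETWORK_IO rows point at the datasets the service tier has not yet finished consuming.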

Normally you would not notice this behaviour as a problem, until users start needing each other's data.

Business Central is designed in such a way that this seldom happens. From both a technical and a functional perspective, users rarely try to change the same records in the system. Most locking in Business Central still occurs because of transaction isolation, not because users actually try to change the same sales order or item.

The problem becomes more pronounced when SQL Server "on its own" decides that datasets should be bigger than what the user actually needs. This can happen when indexes are missing, or when SQL Server for whatever reason decides to ignore an index.

The active result set is as large as the data that SQL Server decides to read from the database, not as large as the data required by the user.

Example

We have a system with 50,000 sales lines and a process that determines which of those lines can be shipped. We want to filter on promised delivery date and location.

Let's assume there is an index on the promised delivery date, because it is a field with many different values and the developer decided to create a key. The customer ships from three different locations, and in each of the three warehouses a manager starts the process at the same time, reading the sales lines that need to be shipped tomorrow.

The datasets they need for the process will contain the same records, because SQL Server will read the index on the promised delivery date and, using that index, read the remaining data from the clustered index. From this dataset it will return the records required for each location.
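In AL, each warehouse manager's run boils down to something like the sketch below. The procedure name is hypothetical; "Promised Delivery Date" and "Location Code" are standard Sales Line fields:

```al
// Sketch: each warehouse manager reads "their" lines, but SQL Server
// may resolve the date filter first and read all three locations' rows.
procedure FindLinesToShip(LocationCode: Code[10]; ShipDate: Date)
var
    SalesLine: Record "Sales Line";
begin
    SalesLine.SetRange("Promised Delivery Date", ShipDate);
    SalesLine.SetRange("Location Code", LocationCode);
    if SalesLine.FindSet() then
        repeat
            // process the line...
        until SalesLine.Next() = 0;
end;
```

Even though each call filters on one location, the result set SQL Server keeps active can cover every row matching the date filter.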

Business Central guarantees that the dataset is consistent throughout the transaction and this is where problems may arise.

What if a field is added to the sales line that indicates whether the shipment is prepared, and we want to set this field during the process?

Each warehouse manager will attempt to change the contents of the dataset while Business Central will try to keep the dataset consistent.

From a functional perspective warehouse managers will never change each other’s records.

The problem escalates when, by accident or by design, users do try to change each other's records. The wait statistics will then also contain a blocking lock. This lock only has to be on a single row in the database to create a chain of waiting events.
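A sketch of how to see that chain, again using standard DMVs: any request with a non-zero blocking_session_id is waiting on another session, and wait_resource identifies the contested row or page.

```sql
-- Sketch: list the blocking chain once users wait on each other's locks.
SELECT r.session_id,
       r.blocking_session_id,   -- the session holding the lock
       r.wait_type,             -- typically an LCK_M_* wait
       r.wait_resource,         -- the row/page/key being fought over
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Following blocking_session_id from victim to blocker reveals how a single locked row fans out into a queue of waiting sessions.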

The Fix! (Or not?)

The best fix for the problem is short transactions and good indexes. If that is not an option, there is a workaround.

Most often the transaction starts with a relatively large set of data that needs to be processed record by record. This dataset is kept in SQL Server's memory throughout the transaction.

By reading the dataset into a dictionary or temporary table and then closing it, the pressure on SQL Server is released and the transaction runs considerably faster, up to 60% or more.
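A sketch of the workaround in AL, under the same assumptions as the earlier example. The "Shipment Prepared" field is the hypothetical field added above; the point is that the server-side result set is fully drained before any record is modified:

```al
// Sketch: copy the filtered lines into a temporary record first,
// so the MARS result set is closed before record-by-record updates start.
procedure ShipLines(LocationCode: Code[10]; ShipDate: Date)
var
    SalesLine: Record "Sales Line";
    TempSalesLine: Record "Sales Line" temporary;
begin
    SalesLine.SetRange("Promised Delivery Date", ShipDate);
    SalesLine.SetRange("Location Code", LocationCode);
    if SalesLine.FindSet() then
        repeat
            TempSalesLine := SalesLine;
            TempSalesLine.Insert();  // in-memory copy on the service tier
        until SalesLine.Next() = 0;
    // The server-side dataset is exhausted here; now work from memory.
    if TempSalesLine.FindSet() then
        repeat
            SalesLine.Get(TempSalesLine."Document Type",
                          TempSalesLine."Document No.",
                          TempSalesLine."Line No.");
            SalesLine."Shipment Prepared" := true;  // hypothetical field
            SalesLine.Modify();
        until TempSalesLine.Next() = 0;
end;
```

The updates still lock rows, but SQL Server no longer has to keep the large read dataset consistent while those updates happen.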

This is however a workaround, not the real solution.

To be continued…

This blog entry is getting pretty long, and up to here it was probably understandable enough for non-technical folks.

In a follow-up I will prove the problem and show the workaround in a simple AL extension.
