MARS Issues Part II | Close those Datasets!

This is a continued blog from the one I wrote earlier today about Multiple Active Result Sets (MARS) causing locks on SQL Server.

In this blog I will show what happens on SQL Server if you keep result sets open during long transactions and we will discuss a possible workaround that I need your feedback on.

How to reproduce this?

Together with the project team, we have reproduced this issue on the latest versions of Business Central, even with a relatively small database.

The examples shown in this blog are from a Business Central 14 system running on a SQL Azure database scaled as General Purpose – Serverless: Gen5, 1 vCore.

The SQL Statements to display the contents of the Active Result Sets are the IP of someone else, but I was given permission to share them on Yammer where most Business Central partners have access.

The database size used here is 77GB with Page Compression and the G/L Entry table has 14.641.928 (roughly 14.6 million) rows.

The code examples can be found on my GitHub.

The Loop

The code that we will execute is the following:

    GLAcc: Record "G/L Account";
    GLEnt: Record "G/L Entry";

    GLEnt.SetRange(Amount, 100, 110);
    if GLEnt.FindSet() then
        repeat
            GLAcc.SetRange("No.", GLEnt."G/L Account No.");
            if GLAcc.FindFirst() then;
            Sleep(1000);
        until GLEnt.Next() = 0;

We create a loop searching for G/L Entries, and for each entry we read the G/L Account and pause for one second.

Imagine this as a real-life business process, for example calculating item values for each Item Ledger Entry posted.

The Result

If you look at SQL Server, you will see that it keeps the dataset in memory so that the service tier can fetch the next batch of 50 records.

SQL Server keeps the dataset in memory and has to manage it for as long as the transaction runs.
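The scripts mentioned earlier for displaying the contents of the active result sets are not mine to share, but the public DMVs can at least show that MARS is in play. A minimal sketch, assuming you can query the execution-related DMVs on your (Azure) SQL database:

```sql
-- MARS logical sessions appear in sys.dm_exec_connections with a
-- parent_connection_id; each such row is an extra active result set
-- multiplexed over the physical connection.
SELECT c.session_id,
       c.connection_id,
       c.parent_connection_id,
       c.connect_time
FROM sys.dm_exec_connections AS c
WHERE c.parent_connection_id IS NOT NULL
ORDER BY c.session_id;
```

While the loop above is running, the service tier's session should keep showing such logical connections for the duration of the transaction.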

The “Fix”

I am very reluctant to call this a fix but here is what we tried:

    GLAcc: Record "G/L Account";
    GLEnt: Record "G/L Entry";
    GLEntTemp: Record "G/L Entry" temporary;

    GLEnt.SetRange(Amount, 100, 110);
    if GLEnt.FindSet() then
        repeat
            GLEntTemp := GLEnt;
            GLEntTemp.Insert();
        until GLEnt.Next() = 0;

    if GLEntTemp.FindSet() then
        repeat
            GLAcc.SetRange("No.", GLEntTemp."G/L Account No.");
            if GLAcc.FindFirst() then;
            Sleep(1000);
        until GLEntTemp.Next() = 0;

You can see that we put the first result set into a buffer table in the memory of the service tier. This allows SQL Server to close the result set and the client to continue the process.

The Performance WIN!

In the tests that we executed, we noticed the process was up to 2.5 or 3 times faster, simply by closing the dataset and reducing the pressure on SQL Server.

Business Central SaaS

This is the difference between MARS and temporary tables on Business Central SaaS.

How about the Service Tier?

We debated whether this would simply move the problem from SQL Server to the service tier.

Based on the blog from Erik Haugaard, we also tried a List object and a Dictionary object. This was, however, slower than using a temporary table…

And what about the locking?

That will be discussed in the third and final part of this blog series…

Multiple Active Result Sets (MARS) causing locks on SQL Server

It has been a while since my last blog. I have been a little preoccupied with my personal situation.

But, the good news is that work continues as usual and it remains interesting enough to share experiences from.

This blog post is about the performance of Business Central on SQL Server. This subject seems to keep finding its way back to me; I tried for a while to shake it off, but decided to just give up and let the interesting projects happen. This, for sure, is an interesting project.

The problem that I am going to describe in this blog can happen to anyone, and it probably does, but it will only become noticeable, and sometimes cause issues, in high-volume scenarios with specifically designed transactions.

What is MARS?

As the title of my blog says, MARS stands for Multiple Active Result Sets, and it is a native feature of SQL Server.

During a transaction, Microsoft Dynamics 365 Business Central needs a lot of data, and this data needs to be kept in SQL Server’s memory until it is no longer needed. Before MARS, this was done using SQL cursors.

You can find more documentation on MARS for Business Central on the Microsoft Docs pages.

MARS is great, but it is also expensive when combined with database integrity guarantees and long-running transactions, which is what this blog is about.

SPOILER ALERT: yes, the problem goes away when transactions are not long-running.

The Problem

When you prepare a dataset in Business Central using AL, SQL Server creates a result set and keeps it in memory while you work with it. The data is sent from SQL Server to the service tier in batches, and when the transaction completes, the dataset is disposed.

Keeping these datasets prepared is not cheap, and the recommendation is to keep transactions short so that SQL Server does not have to keep many datasets active at the same time.

If the duration of a transaction increases, the problem can be analysed by looking at the SQL Server wait statistics. These will show one or more ASYNC_NETWORK_IO waits for the queries that SQL Server is keeping active.
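A minimal sketch of such a check, using only the standard DMVs:

```sql
-- Requests currently waiting on the client: ASYNC_NETWORK_IO means
-- SQL Server has rows ready and is waiting for the service tier to
-- fetch them, keeping the result set active in the meantime.
SELECT r.session_id,
       r.wait_type,
       r.wait_time   AS wait_time_ms,
       t.text        AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = N'ASYNC_NETWORK_IO';
```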

Normally you would not notice this behaviour as a problem, until users start to need each other’s data.

Business Central is designed in such a way that this seldom happens. Both from a technical and a functional perspective, users seldom try to change the same records in the system. Most locking in Business Central still occurs because of transaction isolation, not because users actually try to change the same sales order or item.

The problem becomes more pressing when SQL Server “on its own” decides that datasets should be bigger than what the user actually needs. This can happen when indexes are missing, or when SQL Server for whatever reason decides to ignore an index.

The active result set is as large as the data that SQL Server decides to read from the database, not as large as the data required by the user.


Consider an example: we have a system with 50.000 sales lines and a process that determines which of those sales lines can be shipped. We want to filter on promised delivery date and location.

Let’s assume that there is an index on promised delivery date, because this field has many different values and the developer decided to create a key. The customer ships from three different locations, and in each of the three warehouses a manager starts a process at the same time, reading the sales lines that need to be shipped tomorrow.

The datasets that they need for the process will contain the same records, because SQL Server will read the index on promised delivery date and use it to read the remaining data from the clustered index. From this dataset it will return the records that are required for each location.

Business Central guarantees that the dataset is consistent throughout the transaction and this is where problems may arise.

What if a field is added to the sales line that indicates whether the shipment is prepared, and we want to set this field during the process?

Each warehouse manager will attempt to change the contents of the dataset while Business Central will try to keep the dataset consistent.

From a functional perspective warehouse managers will never change each other’s records.

The problem escalates when (by accident or by design) users do try to change each other’s records. The wait statistics will then also contain a blocking lock. This lock only has to be on a single row in the database to create a chain of waiting sessions.
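Such a chain can be made visible with the waiting-tasks DMV; a sketch:

```sql
-- Tasks waiting on a lock held by another session; following
-- blocking_session_id leads back to the session that holds the
-- single row lock at the head of the chain.
SELECT wt.session_id,
       wt.blocking_session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.blocking_session_id IS NOT NULL;
```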

The Fix! (Or not?)

The best fix for the problem is creating short transactions and good indexes. If this is not an option, there is a workaround.
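To illustrate the “good indexes” half of that advice with the warehouse example from above: an index that covers both filter fields would let each manager’s query read only the rows for its own location. The table and field names below are hypothetical; the physical Business Central table names carry a company prefix.

```sql
-- Hypothetical index covering both filters from the warehouse example,
-- so the three processes no longer read overlapping datasets.
CREATE INDEX IX_SalesLine_PromisedDelivery_Location
ON [Sales Line] ([Promised Delivery Date], [Location Code]);
```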

Most often the transaction starts with a relatively large set of data that needs to be processed record-by-record. This dataset will be kept in memory of SQL Server throughout the transaction.

By reading the dataset into a dictionary or a temporary table and closing the dataset, the pressure on SQL Server is released and the transaction runs considerably faster, up to 60% or more.

This is however a workaround, not the real solution.

To be continued…

This blog entry is getting pretty long, and up to here it was probably understandable enough for non-technical folks.

I will continue by proving the problem and showing the workaround in a simple AL extension.