MARS Issues Part II | Close those Datasets!

This is a continuation of the blog I wrote earlier today about Multiple Active Result Sets (MARS) causing locks on SQL Server.

In this blog I will show what happens on SQL Server if you keep result sets open during long transactions, and we will discuss a possible workaround that I would like your feedback on.

How to reproduce this?

Together with the project team we’ve reproduced this issue on the latest versions of Business Central, also with a relatively small database.

The examples shown in this blog are from a Business Central 14 system running on a SQL Azure database scaled as General Purpose – Serverless: Gen5, 1 vCore.

The SQL Statements to display the contents of the Active Result Sets are the IP of someone else, but I was given permission to share them on Yammer where most Business Central partners have access.

The database used here is 77 GB with page compression, and the G/L Entry table has 14,641,928 (roughly 14.6 million) rows.

The code examples can be found on my GitHub.

The Loop

The code that we will execute is the following:

var
    GLAcc: Record "G/L Account";
    GLEnt: Record "G/L Entry";
begin
    GLEnt.SetRange(Amount, 100, 110);
    GLEnt.FindSet(); // opens a result set that stays active for the whole loop
    repeat
        GLAcc.SetRange("No.", GLEnt."G/L Account No.");
        GLAcc.FindSet(); // opens another result set on every iteration
        Sleep(1000); // simulates one second of business logic per entry
    until GLEnt.Next() = 0;
    Message('Finished');
end;

We create a loop that searches for G/L Entries; for each entry we read the G/L Account and pause for one second.

Imagine that in real life this is a business process, for example calculating item values for each posted Item Ledger Entry.

The Result

If you look at SQL Server, you will see that it keeps the dataset in memory so the service tier can fetch the next 50 records.

SQL Server will keep the dataset in memory and try to manage it for as long as the transaction lasts.

The “Fix”

I am very reluctant to call this a fix, but here is what we tried:

var
    GLAcc: Record "G/L Account";
    GLEnt: Record "G/L Entry";
    GLEntTemp: Record "G/L Entry" temporary;
begin
    // First pass: copy the filtered entries into a temporary buffer on the service tier
    GLEnt.SetRange(Amount, 100, 110);
    GLEnt.FindSet();
    repeat
        GLEntTemp := GLEnt;
        GLEntTemp.Insert();
    until GLEnt.Next() = 0;

    // Second pass: process from the buffer; SQL Server no longer has to keep the result set active
    GLEntTemp.FindSet();
    repeat
        GLAcc.SetRange("No.", GLEntTemp."G/L Account No.");
        GLAcc.FindSet();
        Sleep(1000);
    until GLEntTemp.Next() = 0;
    Message('Finished');
end;

You can see that we put the first result set into a buffer table in the memory of the service tier. This allows SQL Server to close the result set while the client continues the process.

The Performance WIN!

In the tests that we executed, the process was up to 2.5 to 3 times faster just by closing the dataset and reducing the pressure on SQL Server.

Business Central SAAS

This is the difference between MARS and temporary tables on Business Central SaaS:

How about the Service Tier?

We debated whether this would simply move the problem from SQL Server to the service tier.

Based on the blog from Erik Hougaard, we also tried a List object and a Dictionary object. This was, however, slower than using a temporary table…
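For reference, the List-based variant looked roughly like this (a sketch of the approach rather than the exact code we ran, assuming only the account number is needed from each entry):

procedure LoopWithList()
var
    GLAcc: Record "G/L Account";
    GLEnt: Record "G/L Entry";
    AccountNos: List of [Code[20]];
    AccountNo: Code[20];
begin
    GLEnt.SetRange(Amount, 100, 110);
    if GLEnt.FindSet() then
        repeat
            // buffer only the field we need, so SQL Server can close the result set quickly
            AccountNos.Add(GLEnt."G/L Account No.");
        until GLEnt.Next() = 0;

    foreach AccountNo in AccountNos do begin
        GLAcc.SetRange("No.", AccountNo);
        GLAcc.FindSet();
        Sleep(1000);
    end;
end;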

And what about the locking?

That will be discussed in the third and final part of this blog series…

Multiple Active Result Sets (MARS) causing locks on SQL Server

It has been a while since my last blog. I’ve been a little preoccupied with my personal situation.

But, the good news is that work continues as usual and it remains interesting enough to share experiences from.

This blog post is about the performance of Business Central on SQL Server. It seems that this subject continues to be connected to me. I tried for a while to shake it off but decided to just give up and let the interesting projects happen. This for sure is an interesting project.

The problem that I am going to describe in this blog can happen to anyone, and it probably does, but it will only be noticeable, and sometimes cause issues, in cases with specific transaction designs and high volumes.

What is MARS?

As the title of my blog says, MARS stands for Multiple Active Result Sets and it is a native feature of SQL Server.

During a transaction in Microsoft Dynamics 365 Business Central we need a lot of data, and this data needs to be kept in SQL Server’s memory until we no longer need it. Before MARS this was done using SQL cursors.

You can find more documentation on MARS for Business Central on the Microsoft Docs pages.

MARS is great, but it is also expensive when combined with database integrity and long-running transactions, which is what this blog is about.

SPOILER ALERT: Yes, the problem will be solved when transactions are not long running.

The Problem

When you prepare a dataset in Business Central using AL, SQL Server creates a result set and keeps it in memory while you work with it. The data is sent from SQL Server to the service tier in batches, and when the transaction completes the dataset is disposed.

Keeping these datasets prepared is not cheap and the recommendation is to keep transactions short so SQL Server does not have to keep a lot of datasets active at the same time.

If the duration of a transaction increases, the problem can be analysed by looking at the SQL Server wait statistics. These will show one or more ASYNC_NETWORK_IO waits together with the query that SQL Server is keeping active.

Normally you would not notice this behaviour as a problem until users need each other’s data.

Business Central is designed in such a way that this seldom happens. From both a technical and a functional perspective, users seldom try to change the same records in the system. Most locking in Business Central still occurs because of transaction isolation, not because users actually try to change the same sales order or item.

The problem becomes more apparent when SQL Server “on its own” decides that datasets should be bigger than what the user actually needs. This can happen when indexes are missing or when SQL Server, for whatever reason, decides to ignore an index.

The active result set is as large as the data that SQL Server decides to read from the database, not as large as the data required by the user.

Example

We have a system with 50.000 sales lines and we have a process that determines which of the sales lines can be shipped. We want to filter on promised delivery date and location.

Let’s assume that there is an index on the promised delivery date, because this is a field with a lot of different values and the developer decided to create a key. The customer ships from three different locations, and in each of the three warehouses a manager starts a process at the same time, reading the sales lines that need to be shipped tomorrow.

The datasets that they need for the process will contain the same records, because SQL Server will read the index on the promised delivery date and, using that index, read the other data from the clustered index. From this dataset it will return the records that are required for each location.

Business Central guarantees that the dataset is consistent throughout the transaction and this is where problems may arise.

What if a field is added to the sales line that indicates whether the shipment is prepared, and we want to set this field during the process?

Each warehouse manager will attempt to change the contents of the dataset while Business Central will try to keep the dataset consistent.
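To make the scenario concrete, each manager’s process could look roughly like this (a hypothetical sketch; the "Shipment Prepared" field and the procedure name are made up for this example):

procedure MarkLinesToShip(LocationCode: Code[10]; ShipDate: Date)
var
    SalesLine: Record "Sales Line";
begin
    // Every warehouse manager runs this with a different location filter,
    // but SQL Server may read, and keep active, far more rows than this slice.
    SalesLine.SetRange("Promised Delivery Date", ShipDate);
    SalesLine.SetRange("Location Code", LocationCode);
    if SalesLine.FindSet(true) then
        repeat
            SalesLine."Shipment Prepared" := true; // hypothetical field
            SalesLine.Modify();
        until SalesLine.Next() = 0;
end;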

From a functional perspective warehouse managers will never change each other’s records.

The problem escalates when (by accident or by design) users do try to change each other’s records. Then the wait statistics will also show blocking. The lock only has to be on one row in the database to create a chain of waiting sessions.

The Fix! (Or not?)

The best fix for the problem is creating short transactions and good indexes. There is a workaround for the problem if this is not an option.

Most often the transaction starts with a relatively large set of data that needs to be processed record by record. This dataset will be kept in SQL Server’s memory throughout the transaction.

By reading the dataset into a dictionary or temporary table and closing the dataset, the pressure on SQL Server is released and the transaction runs considerably faster, up to 60% or more.

This is however a workaround, not the real solution.

To be continued…

This blog entry is getting pretty long, and up to here it was probably understandable enough for non-technical folks.

I will continue by proving the problem and showing the workaround in a simple AL extension.

Tip #70 | Get ID from Table

It has been a bit quiet from me for a while. My last blog post was about two months ago. I’ve not been programming that much and I am evaluating how to move forward with my (freelance) career. Don’t get me wrong, I am as busy as I want to be, but that currently means less than 20 hours of work per week and spending much more time on myself and my family.

This week I was goofing around a bit with archiving data to Cosmos for a project when I needed to get the ID of a table from a variable. This has always been a challenge in Navision; just Google it and you’ll see.

Since AL leverages more of the fact that our programming language is based on C#, I found out, more or less by accident, that this actually works:

    trigger OnOpenPage()
    var
        CompInfo: Record "Company Information";
    begin
        // RecordId.TableNo returns the table number of the record variable
        Message(Format(CompInfo.RecordId.TableNo));
    end;

It actually shows this:

How Cool is that!

Filtering and Grouping KQL by Hour of Day or Weekday

EDIT! A friend of mine pointed me to the function hourofday(), which is native to KQL. I will leave the post as is, but now you know that this exists. Thanks, Morten.

Initially you will most likely use KQL for ad-hoc analysis when a customer calls you in a panic because the system is slow. But it is much better to use the telemetry to prevent issues, predict when systems are getting slower, and fix issues before users get their eyes all wet.

For this it is important to be able to do comparisons, and in most businesses you can compare business days (Monday vs. Tuesday), weekdays (this Monday vs. last Monday) or hours (10am vs. 4pm).

This morning I logged in to a customer’s system and compared the last 4 weeks per hour.

You can immediately see that the system is not used on Saturday or Sunday, and that the system gets busier during the day towards the end of the day.

Also you can see the effect of Christmas, but that is irrelevant for this post. 😉

I am open to suggestions but this is what I came up with to render a chart showing the busiest hours of the day.

Hour

Event
| extend hour = tostring(toint(substring(tostring(bin(TimeGenerated, 1h)),11,2)) + 1)
| extend server = strcat(extract("Server instance:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("DELETE\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("UPDATE\s.*SET\s.*WHERE\s.*", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| summarize sum(executionTime) by hour
| render piechart

Note that this client is in CET, so I have to manually convert from UTC (hence the + 1 on the hour).

Also, for some reason the Hour needs to be a string to be able to render as a label for the piechart.

Weekday

The weekday is easier if you accept that it is zero-based, where Sunday is zero.

Event
| extend weekday = substring(tostring(dayofweek(TimeGenerated)), 0, 1)
| extend server = strcat(extract("Server instance:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("DELETE\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("UPDATE\s.*SET\s.*WHERE\s.*", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| summarize sum(executionTime) by weekday
| render piechart

Review | Automated Testing in Microsoft Dynamics 365 Business Central

Time flies if you are having fun. It seems like yesterday that Luc van Vugt published his first book about automated testing, and recently the second edition was released.

Everyone in our community knows, or should know, that having the Automated Testing book from Luc is mandatory. Luc is the authority when it comes to this subject.

But the question is, why should you buy this second edition?

Well, because a lot has changed since the first version. And this book is twice as thick!

Why has so much changed?

Testing as such has not, so why is this book so much thicker and better than the previous edition?

The main reason is that our world has changed and professionalized. Integration of Business Central with CI/CD tooling has improved drastically.

My favorite chapter?

Luc has added a chapter on how to write testable code. In my experience, as a developer, this is the most important chapter.

If a developer writes code that is hard to test, has many dependencies, or is too large, testing becomes incredibly difficult.

Since our community has a lot of legacy code we also have a lot of code that was never designed to be tested automatically.

Refactoring has always been hard in our community, which thrives on open code. With the move to an extension model this is no longer an excuse, and now is the time to start refactoring code to improve testability.

Good job Luc! And congratulations on this achievement.

Get the book here.

Best Practices for (Per Tenant) Extensions | Protect Yourself

Time to get back to Best Practices for Per Tenant Extensions.

This time we are going to discuss something that in my opinion should also be implemented by ISVs in their AppSource solutions.

By default, AL objects are extensible. This means that everyone can take a dependency on your extension, and therefore Microsoft does not allow you to refactor any code once it lands on AppSource.

The solution is simple, but since it’s manual it requires extra discipline.

My recommendation would be to, by default, give all tables, pages and codeunits Extensible = false and Access = Internal.

This means others cannot reuse your code and therefore you can change signatures of procedures, rename them and refactor your code.

Examples

table 50100 MyTable
{
    DataClassification = ToBeClassified;
    Extensible = false;

    fields
    {
        field(1; MyField; Integer) { }
    }

    internal procedure MyProcedure()
    begin
        Message('Foo');
    end;
}

page 50100 MyPage
{
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Administration;
    SourceTable = MyTable;
    Extensible = false;

    layout
    {
        area(Content)
        {
            group(GroupName)
            {
                field(Name; Rec.MyField) { }
            }
        }
    }
}

codeunit 50100 MyCodeunit
{
    Access = Internal;

    trigger OnRun()
    begin
    end;

    var
        myInt: Integer;
}

If you are an ISV, your reselling and implementation partners can request that an object be opened up if they have a business reason for it.

Read More

If you want to read more about my Per Tenant Best Practices you can read previous posts.

Why best practices for Per Tenant Extensions?

One Per Tenant Extension to ‘Rule Them All’

Organizing your “big PTE” in Micro Services

PreProcessorSymbols & Per Tenant Extension Best Practices

Extending the same object twice in one Extension

Do you have feedback?

I love it when people have feedback and enjoy answering questions.

What I don’t like is polarization and social media cancel culture. Everybody has the right to their opinion, and everyone has the right to make mistakes and learn from them. Me included.

If you had to appoint an advisory board, would you want a group of people with the same opinion who just say “yes”, or would you like to be challenged with different opinions?

Again, with love and enjoy your “Sinterklaas” weekend

Marije

Oh, TempBlob! What did you do?

The alternative title for this blog post would have been something like… TempBlob, why did you waste my time! Or waste thousands of hours across our community.

The topics of my blogs tend to come from the freelance projects I work on, and last week that meant two extensions of substantial size (1,000+ objects) that had to be made BC19 compatible.

BC19 is the first version of Business Central where warnings about obsoleted objects became errors. The most commonly used of these are TempBlob and Language.

Language

Language is easy. Functions that used to exist in the table moved to a codeunit and the codeunit has the same name.

In both projects, doing a Find/Replace of Language: Record Language with Language: Codeunit Language was enough.

Unfortunately, for those who use Hungarian notation, you also have to change your variable names.
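In practice the change is as small as this (a minimal sketch of the declaration change only; your variable names will differ):

var
    // Before (obsoleted): the functions lived on the Language table
    // Language: Record Language;

    // After: the same functions live on the Language codeunit
    Language: Codeunit Language;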

TempBlob

This one is a lot more difficult. Not because the codeunit has a space in its name, but because of the nature of the Blob field.

In SaaS, a Blob field is the only way to create streams, and it requires quite a bit of coding to work around the obsoleted object.

The “Fix”

In both projects I fixed it by creating a new table called “TLA TempBlob”, where TLA stands for the three-letter abbreviation of the partner on AppSource.

This new table looks like this:

table 50500 "PTE Blob"
{
    TableType = Temporary;
    DataClassification = ToBeClassified;

fields

{
    field(1; "Primary Key"; Code[1]) { }

    field(2; Blob; Blob) { }
}

keys
{
    key(Key1; "Primary Key")  { Clustered = true; }
}

procedure MoreTextLines(): Boolean
begin

    IF NOT ReadLinesInitialized THEN
        StartReadingTextLines(TEXTENCODING::Windows);

    EXIT(NOT GlobalInStream.EOS);
end;

procedure ReadTextLine(): Text
var
    ContentLine: Text;
begin
    IF NOT MoreTextLines THEN
        EXIT('');

    GlobalInStream.READTEXT(ContentLine);
    EXIT(ContentLine);
end;

procedure ReadAsText(LineSeparator: Text; Encoding: Textencoding) Content: Text

var
    InStream: InStream;
    ContentLine: Text;

begin
    Blob.CREATEINSTREAM(InStream, Encoding);

    InStream.READTEXT(Content);

    WHILE NOT InStream.EOS DO BEGIN
        InStream.READTEXT(ContentLine);
        Content += LineSeparator + ContentLine;
    END;
end;

procedure WriteAsText(Content: Text; Encoding: Textencoding)
var
    OutStr: OutStream;
begin
    CLEAR(Blob);
    IF Content = '' THEN
        EXIT;

    Blob.CREATEOUTSTREAM(OutStr, Encoding);
    OutStr.WRITETEXT(Content);
end;

procedure StartReadingTextLines(Encoding: TextEncoding)

begin
    Blob.CREATEINSTREAM(GlobalInStream, Encoding);
    ReadLinesInitialized := TRUE;
end;

var

    GlobalInStream: InStream;
    GlobalOutStream: OutStream;
    ReadLinesInitialized: Boolean;
    WriteLinesInitialized: Boolean;
}
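For completeness, this is roughly how the replacement table is used (a minimal sketch with a made-up procedure name, writing some text and reading it back line by line):

procedure BlobRoundTrip()
var
    TempBlob: Record "PTE Blob" temporary;
    Line: Text;
begin
    TempBlob.WriteAsText('Hello, Blob!', TextEncoding::UTF8);
    TempBlob.StartReadingTextLines(TextEncoding::UTF8);
    while TempBlob.MoreTextLines() do
        Line := TempBlob.ReadTextLine();
end;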

I know that I am not the only one with this solution. All across AppSource, each app has its own new TempBlob table, simply because a codeunit does not allow the Blob field type as a variable type.

TableType = Temporary

The reason Microsoft obsoleted TempBlob is to prevent people from declaring this object without the temporary keyword.

When this happened, the TableType = Temporary property did not yet exist.

Now it does.

Other Changes

There is one other thing I ran into that I wanted to share.

On a lot of pages, Name 2 and Description 2 are added by Microsoft as invisible fields. They also removed a few fields.

The removals meant I ran into an issue with AddAfter. This was solved by changing to AddLast, following the Per Tenant Best Practices that you can find elsewhere on this website.
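For example, something along these lines (a minimal sketch; the page extension, the anchor control and the field shown are illustrative and not taken from the affected apps):

pageextension 50510 "PTE Customer Card Ext" extends "Customer Card"
{
    layout
    {
        // AddAfter breaks as soon as Microsoft removes the anchor control:
        // addafter("Name 2") { ... }

        // AddLast only depends on the group, which is far less likely to disappear
        addlast(General)
        {
            field(PTETerritoryCode; Rec."Territory Code")
            {
                ApplicationArea = All;
            }
        }
    }
}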

Thank you, with love…

Marije

“GENERIC METHOD” | Brilliant or Anti Pattern?

I’ve been in doubt about whether I should write this post or not. Read it fast, as it may disappear if I regret writing it.

Ever since I started working with Navision, almost 25 years ago, I’ve had my own little stubborn ideas. These ideas got me where I am today, but just as often they got me in big trouble and caused personal relationships to shatter right before my eyes.

I have written a lot about the legacy of Navision behind Business Central: the good, the bad and the ugly.

Today I want to talk about events, and why they are bad.

Wait!

Events are bad? But… events are the whole backbone of the extensibility model. How can they be bad?

In order to understand that we first need to talk about Interfaces and Extensible Enums.

Progress in any development language or software framework can cause what was good yesterday to be something to avoid tomorrow, or even today.

Vedbaek, a few years ago…

Let’s rewind the clock a few years. Imagine C/Side with Hooks and Delta Files.

If this does not ring a bell, you are probably not old enough to understand this part of the blog and you can skip to the next paragraph.

A few years ago, and I’ve written many articles about this, Microsoft chose one of its SMB ERP systems to go to the cloud. They only wanted to invest in one, not in three. Dynamics NAV was the chosen one.

The cloud needed a more mature extensibility model, and NAV had Delta Files and Hooks. This was chosen as the basis for the extension model we have today.

Part of this model was built in C/Side, which ended up being what we now know as “events”. Other parts were built outside C/Side and are what we now know as Table Extensions and Page Extensions. The first version did not offer an IDE for these objects, and they were tedious to work with.

What happened after that is history. The model grew into a new compiler that is compatible with Visual Studio Code, and half a million events were added to a 30-year-old application.

1,800 apps in Microsoft AppSource are built on this model and used every day.

So why is that bad?

It’s not bad, per se. But it is very tedious, and it makes the framework very difficult to work with for junior developers.

Finding your way in thousands and thousands of events requires very thorough knowledge of the old Navision application. Since there are only a “few” people with that knowledge, it puts high constraints on the growth of our ecosystem and makes salaries for experienced developers skyrocket.

Events have no relationship with each other whatsoever. A few weeks ago I was talking to a friend who tried to enhance the Item Tracking module, and he had to subscribe to 30+ events across the system to complete a task.

In another case I was consulting for a group of freelancers. They complained that they could never go to AppSource because they had heavily customized the Sales and Purchase posting processes.

My response, as a joke, was that Microsoft has built in the Generic Method pattern to override these posting routines with your own copy. The reason for making it a joke is that I thought (naive girl that I am) that no sane developer would ever consider doing this.

Their response, to my surprise, was just a “thank you for this great suggestion, we will implement this”.

A third real-life story was a small consultation I did for a partner in Germany that offers a very successful payment add-on. They are on AppSource and started to find out that apparently their app is not compatible with all of the other apps. In other words, other apps break their solution.

The reason for this is the “Handled” pattern, which is part of the Generic Method pattern, which by itself is also an anti-pattern but was the only solution we had until the introduction of interfaces.

If two apps subscribe to the same event and one handles it before the other gets a chance… the system fails.
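For those who have not seen it, the Handled pattern looks roughly like this (a minimal sketch with made-up names, not code from any of the apps mentioned):

codeunit 50520 "PTE Handled Pattern Demo"
{
    procedure DoSomething()
    var
        Handled: Boolean;
    begin
        OnBeforeDoSomething(Handled);
        if Handled then
            exit; // one subscriber "handled" it; the default logic and its events never run

        // default implementation goes here
    end;

    [IntegrationEvent(false, false)]
    local procedure OnBeforeDoSomething(var Handled: Boolean)
    begin
    end;
}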

And when someone decides to “override” the posting routines, the events in the original code are skipped.

Interfaces to the rescue

In my humble opinion, events should be marked as “obsolete pending” in favour of interfaces.

For Example: Sales Post

In Business Central, a Sales Document can have a few types such as quote, order, invoice or credit memo. There are a few more and partners can add new ones.

In my opinion a Sales Document Type should have methods that are implemented as part of an interface, such as “Release”, “Calculate Discount”, “Print” or whatever. Anything that is an action on a page.

If a partner really wants to override how the system works (which is bad enough to start with), they are then required to make their own document types. This shows a clear intention that they want the system to behave differently, and it also allows other apps to detect whether they are compatible with this new implementation.
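To give an idea of what I mean, here is a minimal sketch of the concept; the object names are made up and this is not how the Base App is actually structured:

interface "PTE Sales Document Handler"
{
    procedure Release(var SalesHeader: Record "Sales Header");
    procedure Print(var SalesHeader: Record "Sales Header");
}

enum 50530 "PTE Sales Doc Type" implements "PTE Sales Document Handler"
{
    Extensible = true;
    DefaultImplementation = "PTE Sales Document Handler" = "PTE Default Doc Handler";

    value(0; Quote) { }
    value(1; Order) { }
    // a partner adds a new value here and binds its own implementation to it
}

codeunit 50531 "PTE Default Doc Handler" implements "PTE Sales Document Handler"
{
    procedure Release(var SalesHeader: Record "Sales Header")
    begin
        // call the standard release logic
    end;

    procedure Print(var SalesHeader: Record "Sales Header")
    begin
        // call the standard print logic
    end;
}

codeunit 50532 "PTE Doc Dispatcher"
{
    procedure Release(var SalesHeader: Record "Sales Header"; DocType: Enum "PTE Sales Doc Type")
    var
        Handler: Interface "PTE Sales Document Handler";
    begin
        Handler := DocType; // the enum value decides which implementation runs
        Handler.Release(SalesHeader);
    end;
}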

A Payment System, like the German one, should also replace the Payment System from Microsoft if they think they can do a better job.

Someone making a new sales document type can still call the original payment interface in the right places and allow other payment systems to run nonetheless.

Keep on dreaming Marije

A girl can dream, right? I fully understand that the above situation will never happen.

Business Central was built on Navision and is its own legacy system, and events, once our favorite, are now something from the past, replaced by a better alternative.

Microsoft can never replace the events in the Base App with proper interfaces. The code is simply too old and events are all over the place.

Another problem is that an event publisher is married to its caller object. I remember that in the very first discussions I had with Thomas Hejlsberg, I suggested that an event should be able to move around when refactoring requires it, without breaking its subscribers. Unfortunately this never got implemented.

What about ISV’s?

Microsoft is always ahead of the game when compared to ISVs. In the last releases of PrintVis we released a total of four interfaces that all serve a functional purpose. If a user or a partner of PrintVis is unhappy with how an interface behaves, they can implement their own version.

If you have read my thoughts on best practices for Per Tenant Extensions, you should also have seen that I don’t recommend that anyone other than Microsoft or an ISV work with events, enums or interfaces.

If I were to do a code review of a Per Tenant Extension for an end user and I found any of these three, I would put it in my report as “bad unless you have a damn good reason”.

I guess this makes both this blog post and the (anti) pattern a waste of time.

Back to real life…

With love,

Marije

Using Azure Log Analytics on older Dynamics NAV versions

Sometimes there are topics that I could swear I have written about, and then someone makes me realise that this is not the case.

This week that happened with my blog about what Page 9599 means when you see it popping up in Azure Telemetry.

Some folks on Twitter started asking how it was possible that super users were changing data by running tables. I understand the confusion, because in newer versions this is blocked by Microsoft.

But… older versions don’t support analyzing performance telemetry using KQL, right? So this girl must be seriously confused.

Although the latter happens from time to time, this is not the case here, because it is possible to analyse performance telemetry for older NAV versions with Azure Log Analytics and KQL.

I created some documentation around this when I produced readiness materials for QBS Group earlier this year. Since not all of you follow their blog, I figured it made sense to repost it on my own blog.

The Trick – Windows Event Log

To make this work, the trick is simply to enable writing the content of the Windows Event Log to Azure Log Analytics, and to create a few simple KQL queries with regular expressions to analyse the data.

The result is this:

Chart1

And here is an example query:

Event
| where ParameterXml contains "AppObjectType"
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("DELETE\s.*FROM\s.*WHERE\s.*", 0, ParameterXml), extract("UPDATE\s.*SET\s.*WHERE\s.*", 0, ParameterXml), extract("BeginTransaction\s.*", 0, ParameterXml), extract("Commit\s.*", 0, ParameterXml), extract("Rollback\s.*", 0, ParameterXml), extract("INSERT\s.*VALUES\s.*", 0, ParameterXml), extract("SELECT\s.*FROM\s.*", 0, ParameterXml), extract("DECLARE\s.*INSERT\s.*", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| order by TimeGenerated desc

WARNING!!

Microsoft made small changes in different versions of NAV. You may need to change the regular expressions from version to version.

More details can be found on my new GitHub.

Enjoy,

Marije

Business Central Page 9599 | What is it?

Time for a quick blog.

The last few weeks I’ve been heads-down in some performance tuning of Business Central using modern telemetry and KQL.

This is much more powerful than the old SQL Profiler, since it allows you to see the AL stack trace where the problems are caused.

AppObjectType: Page AppObjectId: 9599

This little guy appeared in my telemetry several times: page 9599. I could not find that page, and it was built on a different table every time.

It turns out that if you run a table, a page object is built at runtime that gets page ID 9599.

The lesson I learned is that if I see this, most often an Administrator at the customer is trying to fix data.

If this happens often, talk to the customer and see if a more permanent fix can be applied. Teach the customer that doing this can kill performance for everyone using the system.