Filtering and Grouping KQL by Hour of Day or Weekday

EDIT! A friend of mine pointed me to the native KQL function hourofday(). I will leave the post as is, but now you know it exists. Thanks Morten.

Initially you will most likely use KQL for ad-hoc analysis when a customer calls you in a panic because a system is slow. But it is much better to use the telemetry to prevent issues: predict that a system is getting slower and fix it before users end up in tears.

For this it is important to be able to do comparisons, and in most businesses you can compare business days (Monday vs. Tuesday), the same weekday across weeks (this Monday vs. last Monday), or hours (10 AM vs. 4 PM).

This morning I logged in to a customer's system and compared the last four weeks per hour.

You can immediately see that the system is not used on Saturday or Sunday and that the system gets busier during the day, towards the end of the business day.

Also you can see the effect of Christmas, but that is irrelevant for this post. 😉

I am open to suggestions but this is what I came up with to render a chart showing the busiest hours of the day.

Hour

Event
| extend hour = tostring(toint(substring(tostring(bin(TimeGenerated, 1h)),11,2)) + 1)
| extend server = extract("Server instance:\s{1,}([^\ ]+)\s", 1, ParameterXml)
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("DELETE\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("UPDATE\s.SET\s.WHERE\s.*", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| summarize sum(executionTime) by hour
| render piechart

Note that this client is in CET, so I have to manually convert from UTC.

Also, for some reason the hour needs to be a string to render as a label on the pie chart.
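For completeness, here is roughly what the same query looks like with the native hourofday() function from the edit note at the top of this post. The +1h shift to CET and the column names are my assumptions based on the query above, so treat it as a sketch rather than a drop-in replacement.

Event
| where ParameterXml contains "Message: Long running SQL statement"
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
// hourofday() returns 0-23; shift TimeGenerated by +1h for CET and cast to string for the piechart labels
| extend hour = tostring(hourofday(TimeGenerated + 1h))
| summarize sum(executionTime) by hour
| render piechart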

Weekday

The weekday is easier if you accept that it is zero-based, with Sunday as zero.

Event
| extend weekday = substring(tostring(dayofweek(TimeGenerated)), 0, 1)
| extend server = extract("Server instance:\s{1,}([^\ ]+)\s", 1, ParameterXml)
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("DELETE\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("UPDATE\s.SET\s.WHERE\s.*", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| summarize sum(executionTime) by weekday
| render piechart
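If you prefer readable labels over the zero-based number, a case() expression is one way to do it. This is a sketch built on the same query; the numeric prefix only keeps the slices sorted.

Event
| where ParameterXml contains "Message: Long running SQL statement"
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
// dayofweek() returns a timespan where 0d = Sunday, 1d = Monday, and so on
| extend weekday = case(dayofweek(TimeGenerated) == 0d, "0 Sunday",
                        dayofweek(TimeGenerated) == 1d, "1 Monday",
                        dayofweek(TimeGenerated) == 2d, "2 Tuesday",
                        dayofweek(TimeGenerated) == 3d, "3 Wednesday",
                        dayofweek(TimeGenerated) == 4d, "4 Thursday",
                        dayofweek(TimeGenerated) == 5d, "5 Friday",
                        "6 Saturday")
| summarize sum(executionTime) by weekday
| render piechart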

Review | Automated Testing in Microsoft Dynamics 365 Business Central

Time flies when you are having fun. It seems like yesterday that Luc van Vugt published his first book about automated testing, and now the second edition has been released.

Everyone in our community knows, or should know, that having the Automated Testing book from Luc is mandatory. Luc is the authority when it comes to this subject.

But the question is, why should you buy this second edition?

Well, because a lot has changed since the first version. And this book is twice as thick!

Why has so much changed?

Testing as such has not, so why is this book so much thicker and better than the previous edition?

The main reason is that our world has changed and professionalized. Integration of Business Central with CI/CD tooling has improved drastically.

My favorite chapter?

Luc has added a chapter on how to write testable code. As a developer, I find this the most important chapter.

If a developer writes code that has too many dependencies or is simply too large, testing it becomes incredibly difficult.

Since our community has a lot of legacy code we also have a lot of code that was never designed to be tested automatically.

Refactoring has always been hard in our community, which thrives on open code. With the move to an extension model this is no longer an excuse, and now is the time to start refactoring code to improve testability.

Good job Luc! And congratulations on this achievement.

Get the book here.

Best Practices for (Per Tenant) Extensions | Protect Yourself

Time to get back to Best Practices for Per Tenant Extensions.

This time we are going to discuss something that in my opinion should also be implemented by ISV’s in their AppSource solutions.

By default, AL objects are extensible. This means that everyone can take a dependency on your extension, and therefore Microsoft does not allow you to refactor any code once it lands on AppSource.

The solution is simple, but since it’s manual it requires extra discipline.

My recommendation would be to, by default, mark all tables, pages and codeunits with Extensible = false and Access = Internal.

This means others cannot reuse your code and therefore you can change signatures of procedures, rename them and refactor your code.

Examples

table 50100 MyTable
{    
    DataClassification = ToBeClassified;    
    Extensible = false;
    
    fields    
    {        
        field(1; MyField; Integer) { }
    }        
    
    internal procedure MyProcedure()    
    begin        
        Message('Foo');    
    end;
}
page 50100 MyPage
{
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Administration;
    SourceTable = MyTable;
    Extensible = false;

    layout
    {
        area(Content)
        {
            group(GroupName)
            {
                field(Name; Rec.MyField) { }
            }
        }
    }

}
codeunit 50100 MyCodeunit
{
    Access = Internal;
    
    trigger OnRun()
    begin
        
    end;
    
    var
        myInt: Integer;
}

If you are an ISV, your reselling and implementation partners can request an object to be opened up if they have a business reason for it.

Read More

If you want to read more about my Per Tenant Best Practices you can read previous posts.

Why best practices for Per Tenant Extensions?

One Per Tenant Extension to ‘Rule Them All’

Organizing your “big PTE” in Micro Services

PreProcessorSymbols & Per Tenant Extension Best Practices

Extending the same object twice in one Extension

Do you have feedback?

I love it when people have feedback and enjoy answering questions.

What I don't like is polarization and social media cancel culture. Everybody has the right to their opinion, and everyone has the right to make mistakes and learn from them. Me included.

If you had to appoint an advisory board, would you pick a group of people with the same opinion who just say "yes", or would you rather be challenged with different opinions?

Again, with love, and enjoy your "Sinterklaas" weekend,

Marije

Oh, TempBlob! What did you do?

The alternative title for this blog post would have been something like… TempBlob, why did you waste my time! Or rather, waste thousands of hours across our community.

The topics of my blogs tend to come from the freelance projects I work on, and last week that meant two extensions of substantial size (1,000+ objects) that had to be made BC19 compatible.

BC19 is the first version of Business Central where warnings about obsoleted objects became errors. The most commonly used of those objects are TempBlob and Language.

Language

Language is easy. The functions that used to exist in the table moved to a codeunit with the same name.

In both projects, doing a Find/Replace of "Language: Record" with "Language: Codeunit" was enough.

Unfortunately, if you use Hungarian notation you also have to change your variable names.
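As a minimal sketch of what that change looks like (the object ID is made up, and the helper I call is just an example; check the Language codeunit in the System Application for the procedure you actually need):

codeunit 50510 "PTE Language Sample"
{
    procedure ShowUserLanguage()
    var
        Language: Codeunit Language; // used to be "Language: Record Language" before BC19
    begin
        Message(Language.GetUserLanguageCode());
    end;
}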

TempBlob

This one is a lot more difficult. Not because the codeunit has a space in its name, but because of the nature of the Blob field.

In SaaS, a Blob field is the only way to create streams, and it takes quite a bit of coding to work around the obsoleted objects.

The “Fix”

In both projects I fixed it by creating a new table called "TLA TempBlob", where TLA stands for the three-letter abbreviation of the partner on AppSource. (In the example below the prefix is PTE.)

This new table looks like this:

table 50500 "PTE Blob"
{
    TableType = Temporary;
    DataClassification = ToBeClassified;

fields

{
    field(1; "Primary Key"; Code[1]) { }

    field(2; Blob; Blob) { }
}

keys
{
    key(Key1; "Primary Key")  { Clustered = true; }
}

procedure MoreTextLines(): Boolean
begin

    IF NOT ReadLinesInitialized THEN
        StartReadingTextLines(TEXTENCODING::Windows);

    EXIT(NOT GlobalInStream.EOS);
end;

procedure ReadTextLine(): Text
var
    ContentLine: Text;
begin
    IF NOT MoreTextLines THEN
        EXIT('');

    GlobalInStream.READTEXT(ContentLine);
    EXIT(ContentLine);
end;

procedure ReadAsText(LineSeparator: Text; Encoding: Textencoding) Content: Text

var
    InStream: InStream;
    ContentLine: Text;

begin
    Blob.CREATEINSTREAM(InStream, Encoding);

    InStream.READTEXT(Content);

    WHILE NOT InStream.EOS DO BEGIN
        InStream.READTEXT(ContentLine);
        Content += LineSeparator + ContentLine;
    END;
end;

procedure WriteAsText(Content: Text; Encoding: Textencoding)
var
    OutStr: OutStream;
begin
    CLEAR(Blob);
    IF Content = '' THEN
        EXIT;

    Blob.CREATEOUTSTREAM(OutStr, Encoding);
    OutStr.WRITETEXT(Content);
end;

procedure StartReadingTextLines(Encoding: TextEncoding)

begin
    Blob.CREATEINSTREAM(GlobalInStream, Encoding);
    ReadLinesInitialized := TRUE;
end;

var

    GlobalInStream: InStream;
    GlobalOutStream: OutStream;
    ReadLinesInitialized: Boolean;
    WriteLinesInitialized: Boolean;
}

I know that I am not the only one with this solution. All across AppSource, each app has its own new TempBlob table, simply because a codeunit does not allow a Blob as a variable type.

TableType = Temporary

The reason Microsoft obsoleted TempBlob is to prevent people from declaring the record without the Temporary flag.

When that happened, the TableType = Temporary property did not exist yet.

Now it does.
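For what it's worth, this is how the table above gets used from a codeunit. Because of TableType = Temporary the record variable is always in-memory, so nobody can forget the Temporary flag. The object ID is just an example.

codeunit 50501 "PTE Blob Usage Sample"
{
    procedure RoundTrip(): Text
    var
        PTEBlob: Record "PTE Blob"; // always temporary, the table enforces it
    begin
        PTEBlob.WriteAsText('Hello from the PTE Blob table', TextEncoding::UTF8);
        exit(PTEBlob.ReadAsText('', TextEncoding::UTF8));
    end;
}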

Other Changes

There is one other thing I ran into that I wanted to share.

On a lot of pages, Microsoft added Name 2 and Description 2 as invisible fields. They also removed a few fields.

The removals meant I ran into an issue with AddAfter. This was solved by changing to AddLast, following the Per Tenant Extension best practices that you can find elsewhere on this website.
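A sketch of the difference, with made-up object names and the SystemId field standing in for a real field:

pageextension 50502 "PTE Customer Card Ext." extends "Customer Card"
{
    layout
    {
        // addafter("Some Base Field") breaks as soon as Microsoft removes the anchor field;
        // addlast only depends on the group itself.
        addlast(General)
        {
            field("PTE SystemId"; Rec.SystemId)
            {
                ApplicationArea = All;
            }
        }
    }
}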

Thank you, with love…

Marije

“GENERIC METHOD” | Brilliant or Anti Pattern?

I've been in doubt about whether I should write this post or not. Read it fast, as it may disappear if I regret writing it.

Ever since I started working with Navision, almost 25 years ago, I've had my own little stubborn ideas. These ideas got me where I am today, but just as often they got me in big trouble and caused personal relationships to shatter right before my eyes.

I wrote a lot about the legacy of Navision behind Business Central, the good, the bad and the ugly.

Today I want to talk about events, and why they are bad.

Wait!

Events are bad? But… events are the whole backbone of the extensibility model. How can they be bad?

In order to understand that we first need to talk about Interfaces and Extensible Enums.

Progress in any development language or software framework can cause what was good yesterday to be something to avoid tomorrow, or even today.

Vedbaek, a few years ago…

Let’s rewind the clock a few years. Imagine C/Side with Hooks and Delta Files.

If this does not ring a bell, you are probably not old enough to understand this part of the blog and you can skip to the next paragraph.

A few years ago, and I've written many articles about this, Microsoft chose one of its SMB ERP systems to go to the cloud. They only wanted to invest in one, not in three. Dynamics NAV was the chosen one.

The cloud needed a more mature extensibility model, and NAV had Delta Files and Hooks. This was chosen as the basis for the extension model we have today.

Part of this model was built in C/Side, which ended up being what we now know as "events". Other parts were built outside C/Side and are what we now know as Table Extensions and Page Extensions. The first version did not offer an IDE for these objects, and they were tedious to work with.

What happened after that is history. The model grew into a new compiler that works with Visual Studio Code, and half a million events were added to a 30-year-old application.

1,800 apps on Microsoft AppSource are built on this model and used every day.

So why is that bad?

It's not bad per se. But it is very tedious, and it makes the framework very difficult to work with for junior developers.

Finding your way in thousands and thousands of events requires very thorough knowledge of the old Navision application. Since there are only a "few" people with that knowledge, it puts a high constraint on the growth of our ecosystem and makes salaries for experienced developers skyrocket.

Events have no relationship to each other whatsoever. A few weeks ago I was talking to a friend who tried to enhance the Item Tracking module, and he had to subscribe to 30+ events across the system to complete a task.

In another case I was consulting for a group of freelancers. They complained that they could never go to AppSource because they had heavily customized the sales and purchase posting processes.

My response, as a joke, was that Microsoft has built in the Generic Method pattern to override these posting routines with your own copy. The reason for making it a joke is that I thought (naive girl that I am) that no sane developer would ever consider doing this.

Their response, to my surprise, was just a “thank you for this great suggestion, we will implement this”.

A third real-life story was a small consultation I did for a partner in Germany that offers a very successful payment add-on. They are on AppSource and found out that, apparently, their app is not compatible with all of the other apps. In other words, other apps break their solution.

The reason for this is the "Handled" pattern, which is part of the Generic Method pattern. By itself that is also an anti-pattern, but it was the only solution we had until the introduction of interfaces.

If two apps subscribe to the same event and one handles it before the other gets a chance… the system fails.

And when someone decides to “override” posting routines the events in the original code are skipped.

Interfaces to the rescue

In my humble opinion, events should be marked as “obsolete pending” in favour of interfaces.

For Example: Sales Post

In Business Central, a Sales Document can have a few types such as quote, order, invoice or credit memo. There are a few more and partners can add new ones.

In my opinion a Sales Document Type should have methods that are implemented as part of an interface, such as “Release”, “Calculate Discount”, “Print” or whatever. Anything that is an action on a page.

If a partner really wants to override how the system works (which is bad enough to start with), they would then be required to make their own document types. This shows a clear intention that they want the system to behave differently, and it also allows other apps to detect whether they are compatible with this new implementation.
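To make the dream a bit more concrete, here is a rough sketch of what I mean. None of these objects exist in the Base App; the names, IDs and procedures are made up.

interface "PTE ISalesDocument"
{
    procedure Release(var SalesHeader: Record "Sales Header");
    procedure CalculateDiscount(var SalesHeader: Record "Sales Header");
}

codeunit 50520 "PTE Std. Sales Document" implements "PTE ISalesDocument"
{
    procedure Release(var SalesHeader: Record "Sales Header")
    begin
        // the standard release behaviour would live here
    end;

    procedure CalculateDiscount(var SalesHeader: Record "Sales Header")
    begin
        // the standard discount calculation would live here
    end;
}

enum 50522 "PTE Sales Document Type" implements "PTE ISalesDocument"
{
    Extensible = true;
    DefaultImplementation = "PTE ISalesDocument" = "PTE Std. Sales Document";

    value(0; Quote) { }
    value(1; "Order") { }
    value(2; Invoice) { }
    value(3; "Credit Memo") { }
}

codeunit 50521 "PTE Sales Document Mgt."
{
    procedure Release(DocumentType: Enum "PTE Sales Document Type"; var SalesHeader: Record "Sales Header")
    var
        SalesDocument: Interface "PTE ISalesDocument";
    begin
        // a partner who adds a new document type brings its own implementation along
        SalesDocument := DocumentType;
        SalesDocument.Release(SalesHeader);
    end;
}

A partner that needs different behaviour would add an enum value with its own implementation codeunit instead of subscribing to events all over the place.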

A Payment System, like the German one, should also replace the Payment System from Microsoft if they think they can do a better job.

Someone making a new Sales Document Type can still call the original Payment Interface in the right places and allow other payment systems to run nonetheless.

Keep on dreaming Marije

A girl can dream right? I fully understand that the above situation will never happen.

Business Central was built on Navision and is its own legacy system; events, once our favorite, are now something from the past, replaced by a better alternative.

Microsoft can never replace the events in the Base App with proper interfaces. The code is simply too old and events are all over the place.

Another problem is that an event publisher is married to its caller object. I remember that in the very first discussions I had with Thomas Hejlsberg, I suggested that an event should be able to move around when refactoring requires it, without breaking its subscribers. Unfortunately this never got implemented.

What about ISV’s?

Microsoft is always ahead of the game compared to ISVs. In the last releases of PrintVis we shipped a total of four interfaces that all serve a functional purpose. If a user or a partner of PrintVis is unhappy with how an interface behaves, they can implement their own version.

If you have read my thoughts on best practices for Per Tenant Extensions, you will also have seen that I don't recommend that anyone other than Microsoft or an ISV work with events, enums or interfaces.
If I were to do a code review of a Per Tenant Extension for an end user and I found any of these three, I would put it in my report as "bad unless you have a damn good reason".

This makes both this blog post and the (Anti) pattern a waste of time I guess.

Back to real life…

With love,

Marije

Using Azure Log Analytics on older Dynamics NAV versions

Sometimes there are topics that I could swear I wrote about, and then someone makes me realise that this is not the case.

This week that happened with my blog about what Page 9599 means when you see it popping up in Azure Telemetry.

Some folks on Twitter started asking how it was possible that super users were changing data by running tables. I understand the confusion, because in newer versions this is blocked by Microsoft.

But… older versions don't support analyzing performance telemetry using KQL, right? So this girl must be seriously confused.

Although the latter happens from time to time, it is not the case here, because it is possible to analyse performance telemetry for older NAV versions with Azure Log Analytics and KQL.

I created some documentation around this when I created readiness materials for QBS Group earlier this year. Since not all of you are following their blog I figured it made sense to repost it on my own blog.

The Trick – Windows Event Log

To make this work, the trick is simply to forward the Windows Event Log to Azure Log Analytics and to write a few simple KQL queries with regular expressions to analyse the data.

The result is this:

Chart1

And here is an example query

Event
| where ParameterXml contains "AppObjectType"
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend query = strcat(extract("SELECT\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("DELETE\s.FROM\s.WHERE\s.", 0, ParameterXml), extract("UPDATE\s.SET\s.WHERE\s.", 0, ParameterXml), extract("BeginTransaction\s.", 0, ParameterXml), extract("Commit\s.", 0, ParameterXml), extract("Rollback\s.", 0, ParameterXml), extract("INSERT\s.VALUES\s.", 0, ParameterXml), extract("SELECT\s.FROM\s.", 0, ParameterXml), extract("DECLARE\s.INSERT\s.", 0, ParameterXml))
| where ParameterXml contains "Message: Long running SQL statement"
| order by TimeGenerated desc
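If you want a chart rather than a raw list, summarizing per object is one option. This is just a sketch reusing the same regular expressions; the exact query behind the chart above may look different.

Event
| where ParameterXml contains "Message: Long running SQL statement"
| extend object = strcat(extract("AppObjectType:\s{1,}([^\ ]+)\s", 1, ParameterXml), extract("AppObjectId:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| extend executionTime = toint(extract("Execution time:\s{1,}([^\ ]+)\s", 1, ParameterXml))
| summarize totalExecutionTime = sum(executionTime) by bin(TimeGenerated, 1d), object
| render timechart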

WARNING!!

Microsoft made small changes in different versions of NAV. You may need to change the regular expressions from version to version.

More details can be found on my new GitHub.

Enjoy,

Marije

Business Central Page 9599 | What is it?

Time for a quick blog.

The last few weeks I've been heads-down in performance tuning of Business Central using modern telemetry and KQL.

This is much more powerful than the old SQL Profiler since it allows you to see the stack trace in AL where the problems are caused.

AppObjectType: Page AppObjectId: 9599

This little guy showed up in my telemetry several times: Page 9599. I could not find that page, and it kept appearing with a different source table.

It turns out that if you run a table directly, a page object is built at runtime, and that page gets ID 9599.

The lesson I learned is that when I see this, it is most often an administrator at the customer trying to fix data.

If this happens often, talk to the customer and see if a more permanent fix can be applied. Teach the customer that doing this can kill performance for everyone using the system.
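If you want to keep an eye on this, a query along these lines shows how often someone runs a table directly. It is written against the Windows Event Log setup from my other posts, and the literal strings are assumptions about how the message is formatted, so adjust them to your own data.

Event
| where ParameterXml contains "AppObjectType: Page" and ParameterXml contains "AppObjectId: 9599"
| summarize count() by bin(TimeGenerated, 1d)
| render timechart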

Building a strong, modern community | Tips & Tricks

The world is rapidly changing and with that, the way we interact and consume is also different today than it was yesterday.

In the last 10 weeks I had the pleasure of working on an interesting assignment for one of my customers to help them improve the interaction with their partners with a strong focus on technical content.

At first I was not that eager to start on it. It did not match my personal ambition of going back into technical troubleshooting and learning more about Azure and Dataverse.

Then I figured, why the hell not. I’ve done it before and I have been part of this digital transformation for years.

I like it when I'm challenged to formalize what I naturally do into a format that is reusable by others. The Design Patterns project and book are an excellent example. I don't see a reason why we cannot do that with something less technical, something like a "design pattern" for building a community.

The first communities happened more or less by accident and had a very nerdy character. Within the Business Central world, DynamicsUsers and Mibuso.com are the best examples. The latter is possibly the best: the story goes that Luc van Dyck only meant to run a website to keep track of the Navision stock price, and it grew into what we know today as the BCTechDays event.

Later communities were created "by design", once marketing departments learned about the commercial value of the concept.

The most recent communities I worked on were the How Do I videos for Microsoft, NAV Skills, the ForNAV Coffee Breaks and QBS. I tried to analyze these gigs and see if I could transform them into a "recipe" for building a community.

Here is what I came up with… I'd love to hear your thoughts.

Step 1 – Pick a topic

To have interaction with a technical audience you need a good topic. Every six months or so this can be as easy as what's new in vNext, or you can check the hot topics from support.
Support is a great source of inspiration: it's where things that went wrong get fixed. It does not have to be a programming bug; those are actually not good to use. It is better to pick a question that required someone to spend time investigating. This gives the audience the feeling they are getting something in return for their time. Remember, they are also putting in an hour or two of their week or month.
Don't try to put too much into one webinar. It's better to prepare one thing thoroughly. If you want to combine topics, make sure they are similar.

Step 2 – Prepare your video/demo

People love a live demo, but there is also a big risk that it can go wrong. Make sure you know what you are doing if you go live.
If your demo requires anything that takes time, you can record it in advance; or, if it for example requires installing software on a machine, prepare a second machine where you can continue with the next step.
The advantage of a webinar, even a live one, is that it can be edited before the recording goes online.
Write down your text if you are unsure whether you will remember what you want to say. Once you are more experienced you can write down keywords.
If your demo or story requires clarification, make sure to have a supporting PowerPoint, but remember that it's a tool, not a goal. Your demo is what is most important.
Your PowerPoint should contain keywords and bullet points. A PowerPoint should never contain full sentences that others can read. The danger is that you will read what's on the slides, which takes the focus away from the story. People may mute the sound and fast-forward the recording of your webinar.

Step 3 – Have a Fixed Format

Even though you probably run this webinar every week or month, some of the audience may be attending for the first time. Each webinar should follow the same pattern, starting with an introduction. This allows regular attendees to focus on their work during the first few minutes. You can choose to mix a general explanation and welcome with news about your community.
Never, ever record the interactive part of the webinar. This ensures that attendees are comfortable asking questions without fear of ending up in a recording.
If there are questions that are important to the story, record a Q&A afterwards and include it in the posted video.

Step 4 – Send out invites

Your audience is trying to run a business. They are busy and time is money. Make sure to remind them of your webinar and make sure the topic is clear. They may choose to skip it, not because they don't like you, but because the topic is something they already know about, or they may choose to watch the recording later.
Always link to the previous recordings in your newsletter.

Step 5 – Write a blog with the recording

After the webinar is completed and you’ve edited the recording you can write a short blog to go along with it.
Don’t try to repeat the content of the recording. Instead make sure that after reading your blog the audience wants to watch the recording.
At the end of the blog there should be a link to subscribe to the email that invites the reader to the next webinar.
Make sure to promote the RSS feed of the blog.

Step 6 – Promote the blog on Social Media

Share the URL of the blog on Twitter and LinkedIn. Be careful not to overdo it. Social media platforms have smart algorithms for showing content. It does not help to ask everyone in the team to share the same thing, as it will simply be filtered out or even hidden because the content is not unique.
The platforms are also smart about the same people liking the same kind of content over and over.

The most important ingredient

A lot of companies are attempting to build a community, and if I had to guess, less than half make it and become a success. The ones that make it have strong, unique and honest content. The most commonly made mistake is making it too obvious that your community has a commercial character.

That does not mean your platform cannot support your business. Everyone understands in the year 2021 that a blog, mailing list or video channel has a commercial reason behind it. Just make sure there is balance.

One last tip!

Video content is hot and it works well with a blog. This means that to be successful you need to learn video editing.

Ever since I started doing video I've used Camtasia. The great people at TechSmith have let me use their software for free because of my community influence. I thought that after this many years a big shout-out was well deserved. Thanks guys!

Extending the same object twice in one Extension

I'll be honest. I was a bit disappointed after I published my previous blog. Not about the content, but about the number of people commenting and replying on Twitter.

I talked to a few people in person and they said that it was a bit complex and maybe not everyone completely got what the problem was that I was trying to solve.

The problem is simple. You cannot extend the same table, page or report more than once in one AL project.

Did you know that in the prototype of AL that was given to us as a Christmas present several years ago, it actually did work? At least, you could create a second extension without fields.

I complimented Microsoft about it. I was happy that we could “put things where they belong” in a project with a proper structure.

The answer was "oops, this was not supposed to work", because you can only have one SQL companion table per table.

I bet that if they had spent a few more hours, they could have made the compiler and engine smart enough to group these together and solve the problem altogether.

Why did you say again we need this?

If you want to organize your PTE in modules you need this. I wrote about this in one of my earlier blogs.

And what is the workaround?

That is working with PreProcessorSymbols and Code Cloning.

Maybe that clarifies the blog I wrote a few days ago and gets a few more comments going.

Can we fix the Code Cloning?

Yes! I also discussed this with a few people and hopefully I will blog about that somewhere next week.

Enjoy,

Marije

PreProcessorSymbols & Per Tenant Extension Best Practices

Let’s continue where we left off last week when I shared with you two blog posts about my opinion regarding best practices for Per Tenant Extensions.

I used you as guinea pigs for the project I am currently working on at PrintVis, to get some early feedback from the community before I pitched my ideas to the development team there.

In short I can say that it went both good and bad and I’ll explain a bit why.

The biggest problem is perhaps that an average PrintVis implementation does not require that many customizations. The solution has been implemented about 400 times in 25 years and it is very mature. Most projects would not have more than the "Core" and "Report Pack" folders.

That does not mean they did not like the idea of having more complex modules in separate folders and making them compile individually.

At first I thought that the next blog post in this series would be about the folder structure of the "Core" module, but I decided to save that for a later post and move on to the most frequently asked question I got from both the PrintVis developers and the community.

How the heck do you work around not having dependencies and multiple table and page extensions in one project?

— Everyone…

The solution here came from my good friend Michael Nielsen, who pointed me in the right direction.

PreProcessorSymbols

The AL development language is based on C#, even though its core syntax is based on Pascal. (Confusing, I know.)

Everything we do in AL is actually converted into C# code. In the old days you could even debug this generated code. I cannot believe I am calling this the old days, since I remember the demo at Directions NA like it was yesterday. I am getting old.

Since C# is essentially the base of our language, most new features we get are actually copied from C# into AL. We are moving towards a hybrid Pascal/C# language. #Fun…

A very clear example of this is the Dictionary type, which works almost exactly the same as in C# and makes lookups in AL orders of magnitude faster than buffering the same data in temporary tables.
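A small sketch of what that looks like in AL, counting customers per city; the object ID and the scenario are made up:

codeunit 50530 "PTE Dictionary Sample"
{
    procedure CountCustomersPerCity()
    var
        Customer: Record Customer;
        CustomersPerCity: Dictionary of [Text, Integer];
        City: Text;
        CityCount: Integer;
    begin
        if Customer.FindSet() then
            repeat
                // Get with a second parameter returns false when the key is missing
                if CustomersPerCity.Get(Customer.City, CityCount) then
                    CustomersPerCity.Set(Customer.City, CityCount + 1)
                else
                    CustomersPerCity.Add(Customer.City, 1);
            until Customer.Next() = 0;

        foreach City in CustomersPerCity.Keys() do
            Message('%1: %2', City, CustomersPerCity.Get(City));
    end;
}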

Another thing we got from C# is preprocessor symbols. They have been with us for quite a while and they are extremely powerful for clean-code fanatics like me.

What does it do?

The first thing you need to do is add the preprocessorSymbols tag to one of the app.json files you are working with.

Personally I recommend adding it to your Per Tenant Extension and coding the exceptions against it. This way your modules don't need it in their app.json, and you cannot forget to add or remove it when maintaining them in their own Git repositories.

As you know, I like descriptive names, so we call this one “PerTenantExtension”.

The next thing you do is add exception code to the duplicate objects. Whenever you need a table extension or a page extension in a module, add it in two places and wrap the copy in the module folder in a preprocessor directive, as sketched below.

This means that if your app.json file contains the PerTenantExtension symbol the code compiles; otherwise it is ignored.
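Here is a hedged sketch of the mechanism. The symbol name matches the one above; the table extension, its ID and the field are made up for illustration, so adapt them to your own setup.

// In the Per Tenant Extension's app.json (not in the module's own app.json):
//   "preprocessorSymbols": [ "PerTenantExtension" ]

#if PerTenantExtension
tableextension 50540 "PTE Customer Ext." extends Customer
{
    fields
    {
        field(50540; "PTE My Field"; Code[20])
        {
            DataClassification = CustomerContent;
        }
    }
}
#endif

Depending on where the second copy lives, it gets the opposite check (#if not PerTenantExtension) so that only one of the two compiles in any given build.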

But this is code cloning!

Yes it is. And that is all that can be said about it. It is duplicate code, it is error prone and it requires discipline.

Unfortunately this is the only way right now.

BUT!!!

Not all is lost. What if we find a way to manage this with a Visual Studio Code extension? What if there were an extension that "recognises" this tag and handles it for us in our "Core" extension?

After my miserably failed webinar I got a few offers from community members to investigate this and I plan to spend some time trying to get this organized.

And what about Microsoft?

Another solution could be that Microsoft pitches in and allows us to have multiple table and page extensions in one project, merging them into one C# file at compile time.

It would be wonderful if they could do that, but as there are procedures to follow, we probably first need community buy-in: pitch it as an idea on the ideas website and then upvote it.

That may take some time, but it may be worth it.

It’s worth the discipline!

If you want my personal opinion, it's worth the effort and discipline. If I owned a Business Central shop with a few hundred customers, this is how I would manage customizations without the hassle of dependencies, maintaining AppSource apps, and more.

Customers will be on different versions right?

Let’s compare this way of working to dependencies and AppSource.

Personally I think dependencies belong in AppSource. It's way too complicated to maintain dependencies for multiple Per Tenant Extensions. It may be possible when you are doing the initial implementation and everything still lives in your head, but once the customer goes into production you'll forget. Someone else needs to maintain it, and they'll spend hours untangling your dependencies.

“When I wrote this, only God and I understood what I was doing. Now, God only knows.”

— Unknown

Do customers really want updates?

When customers are happy and up and running they often don’t want updates.

Let's say that after the first implementation you took a module and added things for a second customer. Do you really think your first customer actually cares? Meanwhile, you may have introduced a bug for the initial customer.

If you clone a module into a Per Tenant Extension your customer will be on that version until you explicitly decide to upgrade them and then you can manage it.

You can have a situation where you visit the customer six months after go-live, have a cup of coffee, tell them how you enhanced the code and sell them an upgrade, with some consultancy hours too.

If your module were on AppSource, the customer would have gotten the change for free at a time they did not want it, and they might be upset and demand that you spend time fixing things for free.

Your Feedback Matters!

Best practices only work in a community! I enjoyed all of last week's comments and used them to improve and learn. Please continue to leave comments here, on Twitter, on LinkedIn, or simply send me an email.

Thanks and with love,

Marije