The world is changing rapidly, and with it the way we interact and consume content is different today from what it was yesterday.
Over the last ten weeks I had the pleasure of working on an interesting assignment for one of my customers, helping them improve the interaction with their partners, with a strong focus on technical content.
At first I was not that eager to start on it, as it did not match my personal ambition of going back into technical troubleshooting and learning more about Azure and Dataverse.
Then I figured, why the hell not. I’ve done it before and I have been part of this digital transformation for years.
I like it when I’m challenged to formalize what I naturally do in a format that is reusable by others. The Design Patterns project and book are an excellent example. I don’t see a reason why we cannot do that with something less technical. Something like a “design pattern” for building a community.
The first communities happened more or less by accident and had a very nerdy character. Within the Business Central world, DynamicsUsers and Mibuso.com are the best examples. The latter is possibly the best: the story goes that Luc van Dyck only meant to run a website to keep track of Navision stock prices, and it grew into what we know today as the BCTechDays event.
Later communities were created “by design”, once marketing departments learned about the commercial value of the concept.
The most recent communities I worked on were the How Do I videos for Microsoft, NAV Skills, the ForNAV Coffee Breaks and QBS. I tried to analyze these gigs to see if I could transform them into a “recipe” for building a community.
Here is what I came up with… I’d love to hear your thoughts.
Step 1 – Pick a topic
To have interaction with a technical audience you need a good topic. Every six months or so this can be as easy as what’s new in vNext, or you can check the hot topics from support. Support is a great source of inspiration: it’s where things get fixed that go wrong. It does not have to be a programming bug; those are actually not good to use. It is better to pick a question that required someone to spend some time investigating. This gives the audience the feeling they are getting something in return for their time. Remember, they also put in an hour or two of their week or month. Don’t try to put too much in one webinar. It’s better to prepare one thing thoroughly. If you want to combine, make sure the topics are similar.
Step 2 – Prepare your video/demo
People love a live demo, but there is also a big risk that it can go wrong. Make sure you know what you are doing if you go live. If your demo requires anything that takes time to prepare, you can choose to record it; or if your demo, for example, requires a machine to install software, make sure you have a second machine prepared where you can continue with the next step. The advantage of webinars, even live ones, is that they can be edited before you put the recording online. Write down your text if you are unsure whether you can remember what you want to say. Once you are more experienced you can write down keywords. If your demo or story requires clarification, have a supporting PowerPoint, but remember that it’s a tool, not a goal. Your demo is what is most important. Your PowerPoint should contain keywords and bullet points. A PowerPoint never contains sentences that others can read. The danger is that you will read what’s on the slides, which takes the focus away from the story. People may mute the sound and fast-forward the recording of your webinar.
Step 3 – Have a Fixed Format
Even though you probably do this webinar every week or month, part of the audience may be attending for the first time. Each webinar should follow the same pattern, starting with an introduction. This allows regular attendees to focus on their work during the first few minutes. You can choose to mix a general explanation and welcome with news about your community. Never, ever record the interactive part of the webinar. This ensures that attendees are comfortable asking questions without fear of ending up in a recording. If there are questions that are important to the story, record a Q&A afterwards and include it in the posted video.
Step 4 – Send out invites
Your audience is trying to run a business. They are busy and time is money. Make sure to remind them of your webinar and make sure the topic is clear. They may choose to skip it, not because they don’t like you but because the topic is something they already know about, or they may choose to watch the recording later. Always link to the previous recordings in your newsletter.
Step 5 – Write a blog with the recording
After the webinar is completed and you’ve edited the recording, write a short blog to go along with it. Don’t try to repeat the content of the recording. Instead make sure that after reading your blog the audience wants to watch the recording. At the end of the blog there should be a link to subscribe to the email that invites the reader to the next webinar. Make sure to promote the blog’s RSS feed.
Step 6 – Promote the blog on Social Media
Share the URL of the blog on Twitter and LinkedIn. Be careful not to overdo it. Social media platforms have smart algorithms for showing content. It does not help to ask everyone on the team to share something, as it will simply be filtered out or even hidden because the content is not unique. The platforms are also smart about the same people liking the same kind of content over and over.
The most important ingredient
A lot of companies are attempting to build a community and, if I had to guess, less than half make it and become a success. The ones that make it have strong, unique and honest content. The most common mistake is making it too obvious that your community has a commercial character.
That does not mean your platform cannot support your business. Everyone understands in the year 2021 that a blog, mailing list or video channel has a commercial reason. Just make sure there is balance.
One last tip!
Video content is hot and it works well with a blog. This means that to be successful you need to learn video editing.
Ever since I started doing video I’ve used Camtasia. The great people at TechSmith have let me use their software for free because of my community influence. I thought that after this many years a big shout-out was well deserved. Thanks guys!
Let’s continue where we left off last week when I shared with you two blog posts about my opinion regarding best practices for Per Tenant Extensions.
I used you as guinea pigs for the project I am currently working on at PrintVis, to get some early feedback from the community before I pitched my ideas to the development team there.
In short, it went both well and badly, and I’ll explain why.
The biggest problem is perhaps that an average PrintVis implementation does not require that many customizations. The solution has been implemented about 400 times in 25 years and it is very mature. Most projects would not have more than the “Core” and “Report Pack” folder.
That does not mean they did not like the idea of having more complex modules in separate folders and making them compile individually.
At first I thought the next blog post in this series would be about the folder structure of the “Core” module, but I decided to postpone that and move to the most frequently asked question I got from both the PrintVis developers and the community.
How the heck do you work around not having dependencies and multiple table and page extensions in one project?
The solution here came from my good friend Michael Nielsen as he pointed me in the right direction.
The AL development language is based on C#, even though its core syntax is based on Pascal. – Confusing –
Everything we do in AL is actually converted into C# code. In the old days you could even debug this generated code. I cannot believe I am calling this the old days, since I remember the demo at Directions NA like it was yesterday. I am getting old.
Since C# is essentially the base of our language, most new features we get are actually copied from C# into AL. We are moving towards a hybrid Pascal/C# language. #Fun…
A very clear example of this is the Dictionary type, which works almost exactly the same as in C# and allows AL code to run orders of magnitude faster than the old temporary tables did.
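As a minimal sketch of what that looks like (the procedure and variable names are illustrative, not from a real project), here is a Dictionary counting customers per city in AL:

```al
procedure CountCustomersByCity()
var
    Customer: Record Customer;
    CityCount: Dictionary of [Text, Integer];
    Cnt: Integer;
begin
    if Customer.FindSet() then
        repeat
            // Get returns false when the key is not yet in the dictionary
            if CityCount.Get(Customer.City, Cnt) then
                CityCount.Set(Customer.City, Cnt + 1)
            else
                CityCount.Add(Customer.City, 1);
        until Customer.Next() = 0;
end;
```

With a temporary table you would need a filtered lookup for every key; the Dictionary does the same lookup in constant time, in memory.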
Another thing we got from C# are PreProcessorSymbols. They have been with us for quite a while and they are extremely powerful for clean code fanatics like me.
What does it do?
The first thing you need to do is add the preprocessorSymbols tag to one of the app.json files you are working with.
Personally I recommend adding it only to your Per Tenant Extension’s app.json and coding the exceptions against it. This way your modules don’t need the tag in their own app.json, and you cannot forget to add or remove it when maintaining them in their own Git repositories.
As you know, I like descriptive names, so we call this one “PerTenantExtension”.
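Assuming that symbol name, the manifest entry is a sketch like this (the id, name, publisher and version are placeholders; the property is an array, so you can define more than one symbol):

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "name": "My Per Tenant Extension",
  "publisher": "My Company",
  "version": "1.0.0.0",
  "preprocessorSymbols": [ "PerTenantExtension" ]
}
```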
The next thing you do is add exception code around the duplicated objects. Whenever you need a table extension or a page extension in a module, add it in both places and guard the copy in the module folder with a preprocessor directive.
This means that if your app.json file contains the PerTenantExtension symbol the code will compile, but otherwise it will be ignored.
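Put together, a duplicated table extension in a module folder would be guarded roughly like this (the object name, ID and field are made up for illustration):

```al
#if PerTenantExtension
// Compiled only when "PerTenantExtension" is listed in
// preprocessorSymbols in the app.json being built.
tableextension 50100 "My Customer Ext." extends Customer
{
    fields
    {
        field(50100; "My Field"; Code[20]) { }
    }
}
#endif
```

When the module is compiled on its own, without the symbol in its app.json, the compiler skips this object; when the full PTE is compiled, it is included.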
But this is code cloning!
Yes it is. And that is all that can be said about it. It is duplicate code, it is error-prone and it requires discipline.
Unfortunately, this is the only way right now.
Not all is lost. What if we could manage this somehow with a Visual Studio Code extension? What if there were an extension that “recognises” this tag and handles the duplication for us in our “Core” extension?
After my miserably failed webinar I got a few offers from community members to investigate this, and I plan to spend some time getting this organized.
And what about Microsoft?
Another solution could be that Microsoft pitches in and allows us to have multiple table and page extensions in one project, merging them into one C# file at compile time.
It would be wonderful if they could do that, but as there are procedures, we probably first need community buy-in: pitch it as an idea on the ideas website and then upvote it.
That may take some time, but it may be worth it.
It’s worth the discipline!
If you want my personal opinion: it’s worth the effort and discipline. If I were the owner of a Business Central shop with a few hundred customers, this is what would allow me to manage customizations without the hassle of dependencies, maintaining AppSource apps and more.
Customers will be on different versions right?
Let’s compare this way of working to dependencies and AppSource.
Personally I think dependencies belong in AppSource. It’s way too complicated to maintain dependencies for multiple Per Tenant Extensions. It may be possible while you are doing the initial implementation and everything still lives in your head, but once the customer goes into production you’ll forget. Someone else will need to maintain it, and they’ll spend hours untangling your dependencies.
“When I wrote this, only God and I understood what I was doing. Now, God only knows.”
Do customers really want updates?
When customers are happy and up and running they often don’t want updates.
Let’s say that after the first implementation you took a module and added things for a second customer. Do you really think your first customer actually cares? And you may have introduced a bug for the initial customer.
If you clone a module into a Per Tenant Extension your customer will be on that version until you explicitly decide to upgrade them and then you can manage it.
You can have a situation where you visit the customer six months after go-live, have a cup of coffee, tell them how you enhanced the code and sell them an upgrade, with some consultancy hours too.
If your module were on AppSource, the customer would have gotten the update for free at a time they did not want it, be upset, and demand that you spend time fixing it for free.
Your Feedback Matters!
Best practices only work in a community! I enjoyed all of last week’s comments and used them to improve and learn. Please continue to leave comments here, on Twitter, on LinkedIn, or simply send me an email.
Plans have changed for tonight. Sorry Rene, we’ll try again on Thursday. So I’m at home, and it’s time to share the next part of the best practices for Per Tenant Extensions.
And just in time, because when I shared part I last night there was confusion right away on Twitter, especially about confusing shipping all your modifications as one “big” Per Tenant Extension with building a monolith.
Let’s look at Microsoft Windows, or Office. Just because Microsoft packages it as a product does not make the development a monolith. You can develop in modules and ship as one software package.
The “Perfect” Example
A few years ago Microsoft took our community by surprise and broke the “Functional App” and the “System Modules” into two extensions.
The Functional App is still a monolith. It’s almost impossible to break into modules that compile independently. A few years ago a few of us from the community tried to do so and failed, mainly because we lacked the availability of enums and interfaces. But anyway, the cost of doing it does not outweigh the benefits, and the “big app” stayed big.
The System App is a totally different story. Microsoft developed it as modules that compile individually. You can see this on GitHub.
When we get the product on the “DVD”, or create a dependency, we see that even though Microsoft builds small modules, they ship one big .app file.
This is normal in other software frameworks too; I’ve seen it, for example, when a product like ForNAV is packaged. The fact that design.exe is 75MB does not mean it is coded in one large C# file.
How do you implement this in your PTE Project?
The trick to code in modules and ship as software starts with your folder structure.
Please tell me: how many times have you seen this as a “structure” for an AL project?
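The original post showed a screenshot here; as a reconstruction of the kind of layout it means (folder names are my guess at the typical pattern):

```
MyCustomerExtension/
├── Tables/
├── TableExtensions/
├── Pages/
├── PageExtensions/
├── Codeunits/
└── Reports/
```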
This is horrible and let’s call this an Anti-Pattern from now on.
The Best Practice is to organize your extension in meaningful folder names, like this:
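The original screenshot is not included here, but a sketch of the kind of structure I mean (folder names are examples, not a prescription):

```
MyCustomerPTE/
├── app.json              manifest of the "big" Per Tenant Extension
├── Core/
│   ├── app.json          lets "Core" compile on its own
│   └── ...
├── ReportPack/
│   ├── app.json
│   └── ...
└── Features/
    └── ...
```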
This is not something I “invented”, nor did Microsoft. I actually stole the idea from an Uncle Bob video I saw a few years ago. Apparently other programming frameworks also have a tendency to group code by nerdy categories rather than functional elements.
But with our AL framework we have an extra benefit.
You see that my “Big App” has an app.json but my “Core App” also has an app.json.
This allows me to open the “Core” folder separate in Visual Studio Code and compile it as a “Micro Service”.
Every Per Tenant Extension has at least two submodules: Core and Report Pack. With PrintVis we also have “typical PrintVis things” like calculation formulas and status code triggers. This will be different for other vertical solutions.
Reusable modules go into Feature folders with their own app.json, so they compile separately and can easily be reused. I’ll explain in one of the next posts.
One of the biggest advantages of organizing your code this way is that the folder structure becomes your documentation. You can see exactly what the customer wanted customized without even opening an object.
I’ll dive deeper into this in the next blog where I describe the rules for the “Core” extension. Hopefully tomorrow.
Wait a minute, Marije, this is not new, is it?
No, it is not. This way of working could have been done in C/Side already, and many of us did.
In C/Side we did not have an app.json. Instead we had a “Version List” for each object and a “Documentation” property for table fields.
Many of us have been working like this “unofficially” for decades and had great business success. Why should we not keep on doing that?
The “Because we can” Anti-pattern
If you ask me, the biggest Anti-pattern we have these days in the Business Central community is the argument “Because we (finally) can”. And this is costing us very valuable time.
I see great AL developers become PowerShell experts, CI/CD gods and Docker gurus, all for no reason whatsoever.
We need to look back at C/Side, take back what made us successful, and stop doing things just because we can, without adding value for customers.
This and this alone will solve the capacity problem we have in our community. It is the elephant in the room that, for some reason, nobody wants to see.
Never shy away from a catchy blog title, right?
So enough said about why we need Design Patterns AKA Best Practices for Per Tenant Extensions, let’s dive into my suggestions.
Rule #1 – As few as possible
If you implement Business Central you should have a close fit to your business requirements. The times when we hacked the base app into anything are gone, except for a few rusty brown partners that are still allowed to do so.
On top of the Base App you mix and match AppSource solutions: typically one vertical solution and a few horizontal ones. For example PrintVis with ForNAV and Continia, a combination everyone would understand.
What you then do with your Per Tenant Extension is essentially define exceptions at the metadata level, with a few scripts that you want to execute whenever something happens in the system. Per Tenant Extensions are not massive blocks of vertical business processes. There are a few exceptions, but that is the general rule of thumb.
You may have small processes in your PTE that we would call “features”. In many cases they relate to an interface with this or that, or to creating an Excel sheet that goes somewhere in the organization and is fairly unique to a company. You may, however, reuse these features in other implementations. Not exactly the same, but, you know, as inspiration for starting on the next adventure.
The Anti-Pattern – Many Small Per Tenant Extensions
Design patterns are often easier to explain by describing how not to do things and what can go wrong if you don’t follow them.
The biggest problem with having many small Per Tenant Extensions is the lack of overview and manageability. I’ve seen first-hand situations where source code got lost, different programming styles were applied, object IDs were difficult to manage, et cetera. I am sure you can write something in the comments that also illustrates what can go wrong if you define too many PTEs.
Another challenge that should probably not be a challenge, but still is, is performance when extending tables in the Base App, or in our case PrintVis, multiple times.
Even though Microsoft has done a lot to improve the behaviour, it is still not completely “nice” to have all these extensions, and in the case of a PTE there is a fair chance you want all “your” fields added to list pages anyway, right?
But what about reusability?
This, I agree, is a problem with a big PTE, and I promise I will get back to you on that; we’ll fix it in my next blog or the one after.
What are the exceptions? I thought you would never ask. There is, IMHO, only one exception, and it depends on the Service Level Agreement you have with your customer.
Many customers with a larger internal IT team want to do their “own stuff”. Especially reports.
Of course you can make an agreement with your customer that they get access to the PTE but may only change report objects, nothing else. This may work to some extent, but it is a bit easier to create a dedicated “Report Pack” extension that is the responsibility of the customer.
I agree, it is debatable and probably after reading the rest of the Best Practices series you agree that we may as well include the reports in the big “PTE”.
Also, if you use ForNAV, 99.5% of all report changes can be done WITHOUT Per Tenant Extensions, so it becomes a non-issue altogether.
Organizing the Per Tenant Extension
That is a story for another day, most likely Wednesday or Thursday, as tomorrow I have other plans.
What do you think? Please leave your comments below.
About a month or so ago I did (or tried to do) a webinar about best practices for Per Tenant Extensions. I was unhappy with the result, but I guess the story should be told, and I did promise to get back to you and finish it.
Well, I have, and I am getting ready to start sharing what I think a “perfect” Per Tenant Extension should look like. As always, I am looking for feedback and some interactive discussion.
Why just Per Tenant Extensions?
I believe we lost track of what we are good at as a community. I mean that in several ways but for this blog I will stick to Per Tenant Extensions.
Since we got AppSource a few years ago our community started partying away on it. This has resulted in a whopping 1,800 apps for Business Central today.
Don’t get me wrong, I love AppSource, and just like you I am proud of my contributions. However, I do believe 1,800 apps is a bit much for our community.
Since the beginning of the year I have had a new job/project for a partner in Denmark that you may never have heard of.
The reason I say that is that they are (super) vertical and never “sell against” other partners. Their only competition is outside of the Business Central community.
What makes that cool is that I can essentially share anything I learn with you without feeling guilty about giving away “IP” to the competition. The better Business Central is doing as a product, the better we can compete with other industry-specific solutions. Win-win.
The Upgrade problem
One of the things I spent most of September on was finding and documenting a way to make upgrades from NAV to Business Central easier.
The good news is that we found a way to make upgrades up to 80% cheaper and to almost completely eliminate the dependency on highly skilled developers, which in our ecosystem is the resource type that is the most difficult to find.
Part of the upgrade toolkit that Microsoft provides for Business Central was designed for migrating Great Plains (GP) customers. This is called Table Mapping.
If you customize GP, a lot happens at SQL level, much more than in NAV, where there is more metadata.
In order to migrate custom GP tables, you lift a SQL table into an extension without ever making it an extension on-premises.
Today was my first day back at the (home) office programming in two and a half months. I had already spent a lot of time in the last month or so changing email addresses and other account names to my new name, but I only looked at my BC SaaS sandboxes today.
When I looked at the admin portal it looked like this:
All environments were set to Not Ready, and all options to restart, delete, etc. were greyed out.
At first I thought it was me. That I had broken something in my Office 365 subscription.
I was wrong…
It turns out that Microsoft disables all tenants that are not in use for a certain amount of time. You have to report a “production outage” to get them up and running again.
Sometimes I just have to write my frustration away in order to clear my head. Don’t expect technical tips and tricks in this post, but maybe some inspiration.
Today I was absolutely flabbergasted. Both on Twitter and on LinkedIn (I am a social media junkie) there were actually threads about Microsoft removing the WITH statement in AL. I was literally like OMG! Go spend your time on the future!!
I’m not going to spend more time on this idiotic topic than this. AL is a horrible programming language and in my future programming career I expect to spend less and less time each year using it.
What does your toolbox look like?
My father-in-law, may he rest in peace, could literally make anything with his hands. He was a carpenter by profession, but he could paint, do masonry and plastering, pave roads; you name it and he could do it, as long as he had the right tools, a good mindset, and could watch someone do it for a while to pick up some tricks.
As programmers we seem to be married to languages and frameworks, and I can only guess why this is the case. In the old world where we came from, called “On Premises”, it was hard to have multiple frameworks, operating systems and databases work side by side.
THIS IS NO LONGER TRUE!!! WAKE THE F*CK UP!!
We live in a new world called the cloud, preferably the Microsoft Azure cloud, and in this new world frameworks, databases and programming languages co-exist side by side just fine. C/Side is not your toolbox; Azure is!
How am I migrating our 200GB+ database with 2,000 custom objects to Business Central? BY USING AZURE!!!!!
– Mark Brummel –
Quote me on that.
For the last year or so I’ve been preparing “our” Business Central SaaS migration, and the first thing I did was NOT look at AL code and extensions. The first thing I did was implement Azure Blob Storage.
The second thing I implemented was Azure Functions, replacing C/AL code with C# code.
Number four on my list was Logic Apps, replacing Job Queue processes that scan for new files, and enhancing our EDI.
Right now we are implementing Cosmos DB, with Logic Apps and a custom API, to reduce our database size and improve the scalability of our Power BI.
FIVE PROJECTS to move to Business Central SaaS WITHOUT a single line of AL code written, and we started our project about 18 months ago.
The plan is to move to Business Central SaaS within the next 24 months with as few AL customisations as possible.
You know what is funny? The things we are moving OUT of Business Central are the things that make us agile. These are the things we always have to make ad-hoc changes to, which is why we love C/Side so much.
Please implement a new EDI Interface. Boom, done. With Logic Apps and an Azure Function.
Please change this KPI. Boom, done with Power BI.
Please make this change to the UI. Boom, done with Meta UI.
Oh, and of course, not to forget my friends in Denmark.
Please change the layout of this report. Boom, done with ForNAV!
My frustration is probably not gone; it won’t be gone as long as I read people on the internet still treating AL as if it were C/AL, WHICH IT IS NOT!
Fortunately I have a fantastic new job at QBS which allows me to evangelise thinking outside the box and helping people get started with Azure. Just last week, in a few hours, I got a partner up and running with an Azure tenant running Business Central on a scalable infrastructure to run performance tests.