One of the big promises of the cloud era we live in is the infinite scalability of our applications. Few of us think to test something that is, by design, built into the underlying infrastructure, and why would we? The cloud is quite resilient to changes in load and offers a myriad of tools that help uphold those important SLAs. But the larger a project gets and the more users are involved, the more complicated things become, as they always do. And at some unknown point in the future, the customer will be happy to have tested how much traffic their application can actually handle.
So before falling completely asleep in the arms of the cloud, you might want to ask yourself some of the following questions:
I doubt many of us think about these points often. I should know: writing this was the first time I ever thought about any of them. This stuff is easy to forget, since we usually want to focus on delivering the customer that new business feature, the shiny new UI, or whatever. And the fact that these questions aren’t a bigger problem in our projects speaks to the success of Azure (and probably of any major cloud platform).
Both Azure DevOps and Visual Studio used to have built-in performance testing features from Microsoft. However, these were deprecated and then completely retired in April of 2021. The reasons Microsoft cited were that users were no longer adopting the product and that their “offering had fallen behind”. So I don’t know how good these features were, but their retirement forces us to find new tools. Some of the popular options out there include JMeter, Locust, K6, and Loadster.
Covering every one of these is far outside the scope of this post. I found that each of them falls into one of two categories: solutions that you set up and run yourself, and fully SaaS solutions that are usually used through a web UI.
On a side note, it’s a little bizarre how few options Azure offers for performance testing. First of all, Microsoft doesn’t have a product of its own, but there are surprisingly few third-party solutions as well. I know there used to be more quality products on the marketplace. For example, in this excellent post from 2014, Microsoft MVP Troy Hunt illustrates stress testing his popular site Have I Been Pwned with Loader.io from Azure (highly recommended reading). Loader is no longer on Azure, and few services like it remain there. I guess hosting the services on their own sites is simply more lucrative for these companies.
So I would say that we have three options for setting up our stress test:
- run a tool like JMeter locally on our own machine,
- set up our own testing infrastructure in Azure, or
- use a fully SaaS load testing service.
There are pros and cons to all of these. For example, JMeter can be run locally, and I got it to simulate up to about 40,000 requests per minute just from my laptop. That is a surprisingly good number. But with every local solution you are always limited by the power of your local machine and the bandwidth of your internet connection. Against something running in the cloud this is not really a viable option, except maybe for testing the smallest applications, and I’m not sure even about that. There is also the cost of learning the tools you are using. And while JMeter, for example, is not the hardest software to learn, it’s not the easiest either, with its outdated UI and its own set of concepts. But at least it has a UI, which is not a given with these tools. And of course, the UI is only supposed to be used to build the tests, not to actually run them.
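(If you want to make the “limited by your own machine” point concrete for yourself, a tiny sketch like the one below is enough. This is plain Python with requests and a thread pool, not one of the tools above, and not how you should run a real test; the target URL and the numbers are placeholders, and you should only ever point something like this at an environment you own.)

```python
# Illustrative only: generate naive load from one machine and count how many
# requests actually complete. Throughput plateaus no matter how many workers
# you add, because the laptop's CPU and network connection are the real cap.
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

TARGET = "https://your-own-test-environment.example"  # placeholder: use your own app
WORKERS = 50
DURATION_SECONDS = 60

def hammer(deadline: float) -> int:
    completed = 0
    session = requests.Session()
    while time.time() < deadline:
        try:
            session.get(TARGET, timeout=10)
            completed += 1
        except requests.RequestException:
            pass  # only count requests that completed
    return completed

if __name__ == "__main__":
    deadline = time.time() + DURATION_SECONDS
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hammer, [deadline] * WORKERS))
    print(f"~{sum(results)} completed requests in {DURATION_SECONDS} seconds from this machine")
```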
How about setting up your own testing infrastructure in Azure? It’s a bit of a turnoff when there are SaaS solutions that already do what I intend to do, and do it better out of the box. Your own infrastructure is obviously cheaper than using something like K6 or Loadster, although I think it’s important to remember that the effort you have to spend setting things up is worth money too. But how much is the raw cost? I’m not sure, because I couldn’t get any of the Azure marketplace items to work properly.
I tried three times to deploy the Locust product found in the Azure marketplace with the maximum number of slaves (which is 20). It doesn’t work. With different resource groups, regions, and so on, the deployment never completed successfully. Even with 15 or 10 slaves, it failed every time. I did get Locust to work with 3 slaves, which is the minimum. That gets us roughly 4,000 requests per minute, which is not nearly enough. The distributed JMeter product was better (although you can’t pay for it fully with Azure credits, as is the case with some third-party products). At least it deployed, but after I logged into the master VM and ran my test, the remotes didn’t seem to do much, and then the VM crashed. The earlier point about learning to use JMeter applies here too.
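(For reference, Locust tests themselves are just Python, and a minimal test file looks something like the sketch below. The endpoints, host, and wait times here are placeholder assumptions rather than the ones from my test, and newer Locust versions call the slaves “workers”.)

```python
# locustfile.py — a minimal Locust test sketch. The endpoints below are
# placeholders; point them at your own application.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between tasks

    @task(3)
    def front_page(self):
        self.client.get("/")

    @task(1)
    def search(self):
        self.client.get("/api/search", params={"q": "test"})

# Run distributed without the marketplace image (newer Locust says "worker" instead of "slave"):
#   locust -f locustfile.py --master --host https://yourapp.azurewebsites.net
#   locust -f locustfile.py --worker --master-host <master-ip>   # repeat on each worker machine
```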
Can you see the problem here? This is the minimum amount of effort required to set up your own testing environment, with a pre-packaged Azure marketplace product. And this is just for a performance test. And it’s been hours. And we are not even considering the DevOps angle yet. And it still doesn’t work.
If someone wants to try setting up their own environment from scratch, the best tutorial I’ve seen is this article in the Microsoft docs. Every one of these approaches seems very work-intensive. And remember, we are just trying to set up a performance test.
So then there are the fully SaaS solutions. Expensive. But interesting and convenient. And that’s why I’m going to focus on testing with this kind of tool. There is just so much offered there that I think the cost is quite justified, at least compared to the other options. But more about the features in part 2.
In part 2, let’s break Azure App Service with Loadster. Enjoy the rest of the summer.
From here you can read about our DevOps offering and learn how cloud adoption is changing the way small and big businesses do software development and operations.