Microsoft has recently (where "recently" is roughly as far back as your memory goes) adopted the habit of providing overlapping services and solutions for their services in Azure. Infrastructure-as-Code and the Azure-native Bicep language are no exceptions to the rule - quite the contrary. To keep up with the tooling, and to have some kind of a hunch about the similarities and differences of the various ways to share your Bicep templates, here are the solutions.
The good old repository
The simplest way to share Bicep templates in an organization is just to put them into a git repository, be that in Azure DevOps, GitHub, or some other provider. The repository can be cloned, used as a git submodule in other repositories, or, for example, referenced as a pipeline resource in Azure DevOps.
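As a sketch of the pipeline resource approach, a consumer pipeline in Azure DevOps could reference a shared template repository like this (the project, repository, and tag names are made up for illustration):

```yaml
# azure-pipelines.yml in the consumer repo - names are hypothetical
resources:
  repositories:
    - repository: bicep-templates          # local alias for the shared repo
      type: git                            # Azure Repos git repository
      name: PlatformTeam/bicep-templates   # project/repository
      ref: refs/tags/v1.0.0                # pin to a tag instead of a moving branch

steps:
  - checkout: self
  - checkout: bicep-templates              # templates are checked out alongside your own sources
```

Pinning `ref` to a tag is what gives you some semblance of versioning with this approach.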
The limitations of repository sharing come with versioning and ease of use; in order to share and maintain multiple versions of the same template, you would have to use git branches or tags - or make physical copies of the template and put the version in the file name. Using git is either a piece of cake for an experienced user, or an endless exercise in agony for someone new to it. Then again, that applies to most of the tooling.
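To make the tag-based versioning concrete, here's a small demo in a throwaway local repository (the file contents are placeholders) showing how a consumer can pin to a tagged version of a template instead of the moving branch tip:

```shell
set -e
# Create a throwaway repo with two versions of a template
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo '// storage module v1' > storage.bicep
git add storage.bicep
git commit -qm 'storage module v1'
git tag v1.0.0                                # freeze this state under a version tag
echo '// storage module v2, breaking change' > storage.bicep
git commit -qam 'storage module v2'
# A consumer reads the tagged version, not the branch tip:
git show v1.0.0:storage.bicep                 # prints '// storage module v1'
```

The agony mentioned above starts when you have a dozen modules in one repo and need per-module tags.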
Artifact repositories
I haven't really used this approach myself, but based on an educated guess - the classic consultant approach - you could probably share your Bicep templates by placing them into an artifact, be that a transient artifact like Azure DevOps Pipeline Artifacts, or some kind of artifact repository like Azure DevOps Universal Packages. Other options could be the GitHub equivalents, or something like an Azure Storage Account.
Using the transient artifacts is not the greatest idea, as you are bound to end up with some pretty shady pipeline organization and retention rules for said artifacts, but it's good for what it's meant for - sharing templates between stages of pipeline runs. The runs might take some time and involve several pipelines, and in some kind of large enterprise scenario you might really have to think about things like versioning. Most likely this would accompany some other way of sharing your templates.
Artifact repositories tend to provide things like versioning and metadata, so those should be an option, though not one that Bicep - or any other Infrastructure-as-Code language - supports out of the box.
Registries and Template Specs
Native tooling for sharing Bicep modules - or templates - comes in two competing and overlapping solutions that are both supported by the language. Private registry is basically a rebranded Azure Container Registry (well, not even rebranded - it is an ACR; it's just the documentation and a certain Azure CLI command that refer to it as a registry) where you push your templates.
Template Specs is another native Azure resource with almost exactly the same features as private registries have. Both support versioning, both store the ARM template built from your Bicep code (a step you really don't have to care about, since you can publish your Bicep and reference a published module as Bicep - it's just what happens under the hood), and both are referenced with a similar module notation in code: br: for registry modules and ts: for Template Specs.
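Side by side, the two notations look like this (all names and the subscription id below are made up; both resolve to the same kind of compiled ARM template under the hood):

```bicep
// Module from a private registry (an ACR):
module vnetFromRegistry 'br:contosoacr.azurecr.io/bicep/modules/vnet:v1.0.0' = {
  name: 'vnet-from-registry'
}

// The same module published as a Template Spec
// (subscription id / resource group / spec name : version):
module vnetFromSpec 'ts:00000000-0000-0000-0000-000000000000/rg-template-specs/vnet:v1.0.0' = {
  name: 'vnet-from-spec'
}
```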
There are some differences, though:
- Private registry is an ACR, so it comes with both the baggage and the features of a full-blown container registry. Template Specs are more of a native Azure resource with slightly less overhead. But you'll have to assign the correct RBAC roles for both.
- You can view the stored ARM template of a Template Spec via the Azure portal, but there's no easy way to view a module published into a private registry via the portal. You can, however, right-click the Bicep module reference to the private registry and select "Go to definition", which will show you the compiled ARM template. The Bicep team is also working on letting developers publish the Bicep sources to a private registry as well.
- You can deploy a template from Template Specs directly from the command line, but you can't deploy a module from a private registry without a Bicep entrypoint file referencing the module. In other words, you can publish a full Bicep implementation to Template Specs, but only modules to a private registry.
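One ergonomic trick worth mentioning for the registry route: Bicep supports module aliases in bicepconfig.json, so you don't have to repeat the full registry URL everywhere (the registry name and path here are, again, hypothetical):

```json
{
  "moduleAliases": {
    "br": {
      "corp": {
        "registry": "contosoacr.azurecr.io",
        "modulePath": "bicep/modules"
      }
    }
  }
}
```

With that in place, the module reference shortens to `module vnet 'br/corp:vnet:v1.0.0'`, which also makes swapping registries between environments a one-file change.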
There is also the concept of a public registry, and if you are using the CARML resource modules I mentioned in an earlier blog post, you should pay attention to the Azure Verified Modules initiative. The CARML team has had a long-term plan to publish their Bicep modules to a Microsoft-hosted public registry, but apparently Microsoft has now decided to put a bit more effort into supporting curated modules for both Bicep and Terraform.
(You can imagine a user story starting with "As a DevOps Engineer, I need to rewrite all the internal documentation, replacing a solution with a pattern module and…" here. Not that I'm complaining, it's nice to get new shiny things.)
Deployment Environments
Azure Deployment Environments are aimed more at developer self-service than Infrastructure-as-Code development, and aim to abstract the infrastructure layer away from the devs. The service also provides a managed Dev Box offering, but let's overlook that for now and concentrate on the IaC side of things.
The technical solution for sharing templates is putting them into a repository, then attaching that repository to the service and creating a catalog out of it. You select a repo, a branch, and a folder path, and the Deployment Environments service traverses through the subfolders, searching for a metadata file that exposes the entrypoint file and parameters, and holds versioning information. You then create a project and scope it to a subscription. Next, you create an environment type tag in the Deployment Environments, and tie environment types to your project. When you grant a developer access to a project, he or she can log into the Microsoft developer portal (which is a thing I totally did not know about before giving the Deployment Environments a spin - and you see a whole lot of nothing behind that link until you do the steps mentioned) and provision the infra implemented in a catalog template.
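To give a rough idea of that metadata file, an environment definition sits next to the template in the catalog folder and looks something like this (a sketch based on my understanding of the schema; names and parameters are invented):

```yaml
# environment.yaml - environment definition in the catalog folder
name: WebApp
version: 1.0.0
summary: Self-service web app environment
runner: ARM                       # the catalog deploys the compiled ARM template
templatePath: azuredeploy.json    # the entrypoint exposed to the developer portal
parameters:
  - id: appName
    name: Application name
    type: string
    required: true
  - id: location
    name: Deployment location
    type: string
    default: westeurope
```

The parameters listed here are exactly what ends up as the small text boxes in the developer portal, which leads us to the catches below.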
There are several catches here:
- The catalogs currently only support ARM templates (and Terraform), so for Bicep development you have the extra step of compiling your Bicep into ARM before committing changes into the catalog repository - a step you will forget to do 100 times out of 100, and end up wondering why your changes do not sync into the Deployment Environments catalog.
- Syncing is also a thing here - changes to the repo are not automatically pushed to the catalog; a sync operation needs to be initiated. Though I'm sure it can be automated and scheduled.
- The deployment is scoped to a resource group which gets its name from the environment config and the deployment name the developer comes up with, resulting in something that's probably not following any naming standard. The nice thing here is that the lifecycle of that resource group is controlled by the developer portal, so the end user can provision and de-provision the resources. I'm sure there are multiple ways this can spectacularly fail, though.
- You have to expose the parameters that need to be supplied at deployment time via the metadata file, and the developer portal UI for the parameters is just about what you can expect - small text boxes. So it's okay for simple things like an application name and deployment location, but not really usable for anything more complex. Forget about landing zone self-service deployments for now if you have any need to configure, say, networking.
- You can, with effort, abstract a lot of things away and create a self-service deployment that leverages other things, like modules in private registries, and provisions them into, say, a virtual network. So for scenarios like needing to provision a lot of identical self-service environments for, say, data scientists, this could totally be a thing, alongside the more apparent uses like self-service testing environments for devs. It's just that you need to consider the trade-offs between hard-coding and parameterizing things.
- Deployment Environments is, I assume, a service still in development, and you are sure to find some undocumented features - like the docs talking about managed identities, only for you to find out that the service creates a service principal per environment tag, and that's what you have to assign the roles to when, for example, wanting to deploy private endpoints.
Azure Developer CLI
The Azure Developer CLI (azd) came out at Build 2023 and probably ticks a lot of boxes for devs, while causing a lot of gray hair for the DevOps unicorns - at least those who have to work at the enterprise level. The grand idea behind azd, from an Infrastructure-as-Code point of view, is that a developer can leverage a growing collection of Bicep or Terraform templates with which to initialize a new project, along with the associated application code. You can then use the CLI to configure different environments, and inject environment variables into the IaC templates, code, and deployment pipelines.
The catch here is that azd stores the environments and environment variables inside an .azure folder, in .env files therein. It creates Azure DevOps and GitHub Actions YAML files for pipelines, but those pipelines are essentially executed in a container with azd installed, so the pipelines also execute azd commands and use the .env files. You can also provision the pipelines, and even an Azure DevOps project, from the CLI. It'll also create a service principal, and probably the Azure DevOps service connection for you (and for GitHub Actions, the federated credentials).
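As a rough sketch (the project name and values are made up, the file names follow azd's conventions as I understand them), an azd-initialized project looks something like this:

```text
my-app/
├── azure.yaml            # azd project definition: services and where they're hosted
├── infra/
│   └── main.bicep        # the IaC entrypoint that azd provisions
└── .azure/
    ├── config.json       # tracks which environment is currently selected
    └── dev/
        └── .env          # AZURE_ENV_NAME="dev", AZURE_LOCATION="westeurope", ...
```

Those .env files are the part that tends to clash with enterprise-style pipeline variable and secret management.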
This setup will probably work splendidly for a developer with his or her own subscription, be that an MSDN-provided one or some kind of sandbox. In more regulated environments, you would probably have a hard time provisioning some of the stuff azd promises to do for you, and the pre-made templates only take you so far with something like workload landing zone environments. In other words, it's a great tool for learning and trying things out, but don't get too attached to the apparent ease of provisioning things from the CLI, since at some point you will probably have to switch to something else.
Or, being the forever-pessimistic DevOps Engineer I am, I can certainly anticipate a queue of developers asking why they can't have nice things, and suggesting that mister Someone Else should probably lift the azd magic off of the sandbox and into the bigger world.
I did not spot a chance to point the provisioning command to a template storage of your own - say, a private repo - so you are probably stuck with the community-sourced templates for now. Provisioning some kind of corporate-sanctioned template would elevate azd into the same ballpark as the previously mentioned services, though of course with a similar amount of baggage involved.
Blueprints
While Azure Blueprints have not really been a thing for a while now, and they will be deprecated in 2026, they probably need to be mentioned here as well. The real reason I wanted to sneak them in is to mention that you should probably take a look at Bicep deployment stacks, which are a sort of successor to at least the lifecycle-management part of Blueprints - even if they are a feature of the language, the Azure Resource Manager, and the deployment itself, and not a way of sharing code.
I'm sure, though, that soon enough someone dreams up a thing that packages Azure Resources and Policy IaC into a new thing, because why not?
Conclusion
So, there you have it: there are multiple ways of sharing and templating your Bicep Infrastructure as Code, and I encourage you to at least give the different alternatives a spin. When thinking about adopting something beyond a single project scope, you should evaluate at least some of them: think about versioning the templates and modules, give a thought to change management, and embed everything in some governance that lays out the law and the ways of the land.
