Build your CI with Azure Pipelines YAML

Pasi Huuhka | 21.01.2020
Reading time 8 min

Have you grown tired of clicking things in the Azure DevOps portal and having to make changes to each environment individually? I know I have, but thankfully I have a solution! In this post, we’re going to learn what Azure Pipelines YAML is, why you should use it and how to get started with creating your build pipelines with it.

What is Azure Pipelines YAML?

Until quite recently, Microsoft’s own DevOps product – Azure DevOps – has been promoting the use of build and release pipelines created with the “Classic” style user interface. Whenever you look at any of their presentations, that is the only thing you’ll see.

While Classic pipelines do make the functionality of the service quite easy to grasp, they come with major inconveniences that pop up once your environment gets more complex. Copying pipelines to another project is a hassle, creating templates for recurring use is needlessly complicated, and if you have multiple app environments, you will need to replicate your changes by hand multiple times, especially if your releases use a separate branch from your dev environment.

Competitors have had solutions to these issues for quite some time, and now Microsoft’s response – YAML-based pipelines – has almost reached feature parity with the GUI-based approach. I feel that it’s finally worth learning the ins and outs of the new standard. So what exactly are they?

Put simply, they are your pipeline logic stored as code in your repository. Just like the rise of Infrastructure as Code, the next step in complete portability and packaging your product is having the deployment logic shipped with the repository itself. This also opens up so many more possibilities in how you manage your workflow and standardization in your company.

In my opinion, these are the three biggest benefits of using YAML for pipelines:

  • Pipelines are in your repository – You gain all the benefits of git processes like pull requests, version history, branching, etc. You no longer have to worry about breaking the pipeline for everyone else while you develop improvements to it. At the same time, you can be certain that your build and release pipelines will still work in the release branch at a later time, in case you ever need to return to them. In other words, the state of the code and the pipelines are always aligned, which can save hours and hours of troubleshooting later.
  • Code can be reused – YAML pipelines support using templates and passing them parameters – even from separate repositories – allowing you to create standardized building blocks that all your projects can utilize. While this was somewhat doable in Classic pipelines, YAML really takes this to the next level.
  • Parallelizing different parts of the process is much simpler – With Classic pipelines, your only option for parallelization is using stages in release pipelines. This often leads to stages that contain very little functionality, ultimately making the overview page of the Releases view useless for showing the current status of deployments. A sketch of parallel jobs follows this list.
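
To illustrate that last point, here is a minimal sketch of a set of jobs where two run in parallel and a third waits for both. The job names and echo commands are made up for the example:

jobs:
- job: BuildApp                # no dependsOn, so it starts immediately
  steps:
  - script: echo building
- job: RunTests                # also starts immediately, in parallel with BuildApp
  steps:
  - script: echo testing
- job: Package
  dependsOn:                   # runs only after both parallel jobs have finished
  - BuildApp
  - RunTests
  steps:
  - script: echo packaging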

There are some drawbacks to this approach, though. As with learning any new technology, building with YAML will take more work than just doing the same thing with Classic pipelines. The two are not yet equal in their features either, and especially in more complex situations you will notice that some features (like pre/post-deployment gates) are missing. On top of that, this extra work will need to be justified to the customer, and they need to understand the long-term benefits, which more often than not outweigh the initial cost by a large margin.

Currently, the only out-of-the-box option is creating build pipelines with YAML with only one stage, and that is what this blog post will focus on. To enable more complex pipelines that can be used for releases, you can turn on the “Multi-stage pipelines” preview feature. I’d recommend enabling it, as it is very near being changed to an opt-out flag.

The basic building blocks of a YAML pipeline

On the surface, the YAML pipeline structure does not differ much from its Classic counterpart:

  • Stages are major divisions in a pipeline: “build the app package”, “run these tests”, “deploy to production”. They are the same thing that Classic release pipelines are divided into; a build pipeline in Classic represents a single stage. Each stage can contain one or more Jobs.
  • Jobs are items that are assigned to a single agent machine in the agent pool. Both Jobs and Stages can be arranged to depend on other items at the same level or to run in parallel, creating dependency graphs. New to YAML are deployment-type jobs, which are pointed at a specific environment and use a specific deployment strategy.
  • Steps are linear sequences of individual pieces of functionality, like running a script or a task, just like in the Classic version. There are multiple keyword shortcuts for specifying a task, like “download” for calling the Download Pipeline Artifacts task. Using these can be a bit confusing at times, as the documentation is not always very clear on how exactly to use them. A sketch of a deployment job using a couple of these shortcuts follows the basic example below.

In a YAML file, a basic pipeline would look something like this:

trigger:
- master

stages: 
- stage: stage1
  jobs:
  - job: job1
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        restoreSolution: 'mysolution.sln'
    - script: echo Hello, world!
      displayName: 'Run a one-line script'

- stage: stage2
  jobs:
  - job: importantjob
    pool:
      vmImage: 'windows-latest'
    steps:
    - pwsh: 'Write-Output "I do nothing"'
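
A deployment-type job, mentioned in the list above, looks slightly different. Here is a minimal sketch, where the environment name is a placeholder and runOnce is the simplest available strategy; it also uses the “download” and “pwsh” step shortcuts:

jobs:
- deployment: DeployWeb
  pool:
    vmImage: 'windows-latest'
  environment: 'my-test-environment'    # placeholder environment name
  strategy:
    runOnce:                            # the simplest deployment strategy
      deploy:
        steps:
        - download: current             # shortcut for the Download Pipeline Artifacts task
          artifact: web
        - pwsh: 'Write-Output "deploying"'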

All of these can also be references to a template, with parameters.

Template caller:

trigger:
- master

stages: 
- template: template-stage.yaml
  parameters:
    stageName: 'MyStage1'
    vmImage: 'windows-latest'
    restoreSolution: 'mysolution.sln'

Template content:

parameters:
  stageName: ''       # should fail if not given
  vmImage: 'windows-2019'  # default values if not given in caller
  restoreSolution: ''

stages: 
- stage: ${{ parameters.stageName }}
  pool:
    vmImage: ${{ parameters.vmImage }}
  jobs:
  - job: job1
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        restoreSolution: '${{ parameters.restoreSolution }}'
    - script: echo Hello, world!
      displayName: 'Run a one-line script'

Not all of these are required for a functioning template. The minimum you can work with is a simple list of steps, as the sketch below shows. Once you get further into studying the different levels in use, be sure to check the documentation for each building block.
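
A steps-only template and its caller can be as small as this – the file name steps-template.yaml and the message parameter are made up for the example:

Template caller:

steps:
- template: steps-template.yaml
  parameters:
    message: 'Hi there'

Template content:

parameters:
  message: 'Hello from a template'   # default used if the caller passes nothing

steps:
- script: echo ${{ parameters.message }}
  displayName: 'Templated step'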

Okay, so where do I start?

Luckily, Microsoft has provided us with a somewhat confusing pile of documentation we can utilize, along with some starter templates we can use as a base. They have also created an extension for VS Code, but the best way is still to use the Azure DevOps portal, as it has a very useful task assistant tool for creating the individual step configurations, which makes learning the syntax especially easy.

Here’s a short example of how to create your first pipeline.

  • Create a new pipeline just like before, and choose where your code is located.
  • Azure DevOps analyzes your code and suggests some basic templates you can start with. Select the one you need. (You can also select an existing YAML file in your repo here.)
  • Choose where in the repo the YAML file will be located.

I started with an ASP.NET template, as it already had most of the variables I needed configured, and the trigger was already set to run every time master is updated. However, I used the assistant to configure .NET Core CLI tasks instead of the suggested ones and specified the windows-2019 agent pool instead of the default windows-latest. In addition, I added display name fields so the logs would be clearer. Finally, I used the new PublishPipelineArtifact task to publish my zipped projects for a release pipeline to utilize.

Here’s the full code of my finalized pipeline:

trigger:
- master

pool:
  vmImage: 'windows-2019'

variables:
  buildConfiguration: 'Release'

steps:
- task: DotNetCoreCLI@2
  displayName: "Dotnet Restore"
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
    feedsToUse: 'select'
- task: DotNetCoreCLI@2
  displayName: "Dotnet Build"
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--no-restore --configuration $(buildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: "Dotnet Publish"
  inputs:
    command: 'publish'
    publishWebProjects: true
    # --configuration must match the build step, or --no-build will not find the compiled output
    arguments: '-o $(Pipeline.Workspace)/publish --configuration $(buildConfiguration) --no-build'
- task: PublishPipelineArtifact@1
  displayName: "Publish Artifacts"
  inputs:
    targetPath: '$(Pipeline.Workspace)/publish'
    artifact: 'web'
    publishLocation: 'pipeline'

If you want to, you could also create the variables using the Variables button in the top right-hand corner. This will allow you to specify them at queue time, to whatever value you desire. You would still point to them in the pipeline just like any other variable, and the dialogue also gives you some more examples of usage. I tend to keep the variables in the pipeline file so that as much of the logic as possible moves with the repo.
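
For example, the buildConfiguration variable above would be referenced the same way whether it lives in the YAML file or in the Variables dialogue:

- script: echo Building in $(buildConfiguration) mode
  displayName: 'Show the active configuration'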

Then you can just save and run the pipeline, selecting whether to commit directly to the branch you selected or to create a new branch instead. This choice depends on your git practices, though I would strongly recommend using branching and pull requests here.

Following the status of the run is just as easy as before, and provides the same amount of logging.

Going further

While the previous example is very simple and only covers the basics, it gives you a decent place to start building your own pipelines.

In case you already have ready-made pipelines, the easiest way to convert them to YAML is to use the “View YAML” button found on each Job in Classic build pipelines, and on each individual task in all Classic pipelines. The Job-level button is especially useful, as it lets you convert almost the whole build pipeline with very little work required.

In case you are yearning to learn more, here are some features you should take a look at. They will pop up in most of the more complicated pipelines you will be creating, especially once you start meddling with multi-stage templates and doing releases: