In the past, I’ve used the method of placing an app_offline.htm file at the root of an IIS website to throw up a maintenance page. This has been available since ASP.NET 2.0 / 3.5. Lately I’ve grown used to zero-downtime deployment approaches, such as rolling and blue/green deployments, and I had forgotten about app_offline.htm when I recently set up deployment pipelines for some ASP.NET Core sites.
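A deployment step can script this directly: write app_offline.htm before copying files, then delete it afterward. A minimal Python sketch (the site path and page contents are placeholders, not from any real deployment):

```python
import contextlib
from pathlib import Path

MAINTENANCE_HTML = "<html><body><h1>Down for maintenance</h1></body></html>"

@contextlib.contextmanager
def app_offline(site_root):
    """Take an ASP.NET site offline for the duration of a deployment.

    Dropping app_offline.htm at the site root causes ASP.NET to shut the
    app down and serve this page for every request; removing the file
    brings the site back up.
    """
    marker = Path(site_root) / "app_offline.htm"
    marker.write_text(MAINTENANCE_HTML)
    try:
        yield marker
    finally:
        marker.unlink()

# Usage (hypothetical path and deploy step):
# with app_offline(r"C:\inetpub\wwwroot\mysite"):
#     copy_new_build()  # deploy while the maintenance page is shown
```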
I recently had to come up with a solution to perform a bulk deploy of all apps to an environment using the latest build artifacts. I wanted to use a “wrapper” release definition to orchestrate all of the deployments; similar to how Octopus Deploy’s “Deploy Release Step” works.
However, TFS Release Management currently lacks functionality to create releases from within a build or release definition. There are 3rd party extensions to queue/trigger other builds within a build definition, but nothing to create releases.
The new TFS/Azure Pipelines build and release tasks for running functional tests make setting up a CI pipeline pretty dang easy. The VSTest task can now run unit tests AND functional tests; Microsoft deprecated the “Run Functional Tests” task in favor of consolidating all things test-related. In this post I will outline the steps to trigger a CI build that builds and packages your functional test project, runs the tests, and then publishes the test run results. This will be fairly high level, and I’m going to assume you already know the basics of installing agents and creating build/release definitions.
Continue reading Setting up a CI pipeline to run functional tests in TFS 2018 and Azure Pipelines (Formerly VSTS)
The semantic versioning in all of our TFS/VSTS CI builds uses the predefined variable Build.BuildId for the buildnumber portion of major.minor.revision.buildnumber. The build ID gives us traceability when troubleshooting: we can easily look up the build and see the associated changes.
Major.minor.revision is set in a variable group so it can be shared and updated in one place, across build definitions.
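Putting those two pieces together, the version string is assembled roughly like this (a Python sketch; on TFS/VSTS agents the Build.BuildId variable is exposed to scripts as the BUILD_BUILDID environment variable):

```python
import os

def build_version(major, minor, revision, build_id=None):
    """Compose major.minor.revision.buildnumber.

    major/minor/revision come from the shared variable group; the build ID
    (predefined variable Build.BuildId) supplies the final component.
    """
    if build_id is None:
        # Inside a pipeline, Build.BuildId is available as BUILD_BUILDID.
        build_id = os.environ["BUILD_BUILDID"]
    return f"{major}.{minor}.{revision}.{build_id}"
```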
I recently set out to automate the creation of our Windows build servers that run VSTS agents. Previously the build servers were thought of as “snowflake” servers because of all the software components and customizations they required. This was even more reason to use Infrastructure as Code and get rid of the manual run books that were previously used to document the creation of a build server. Our Infrastructure team had already decided on a tool chain for Infrastructure as Code, which included Chef for Configuration Management.
In order to gracefully deploy releases to our application servers, I wanted to utilize rolling deployments in Octopus Deploy. If you aren’t familiar, rolling deployments slowly roll out a release one instance at a time (vs. all instances at once), with the goal of reducing overall downtime. To accomplish this, I wrote PowerShell scripts that leverage AWS Auto Scaling Groups (ASG) and run as part of Octopus deployments.
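The shape of the process, independent of the AWS specifics, is a simple loop. A minimal Python sketch (the standby/deploy/return callbacks stand in for the ASG operations the real PowerShell scripts perform, such as moving an instance into and out of Standby):

```python
def rolling_deploy(instances, enter_standby, deploy, exit_standby):
    """Deploy to one instance at a time instead of all at once."""
    deployed = []
    for instance in instances:      # one instance at a time
        enter_standby(instance)     # drain it from the load balancer
        try:
            deploy(instance)        # push the new release to it
        finally:
            exit_standby(instance)  # return it to service
        deployed.append(instance)
    return deployed
```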
Last week Microsoft announced “Pipeline as code (YAML)”, giving us the ability to store our builds in source control. This lets us take advantage of Configuration as Code, along with other benefits not available to builds defined through the web interface:
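A hypothetical minimal definition checked into the repo might look like the following (task names, versions, and inputs are illustrative, not taken from the announcement):

```yaml
# .vsts-ci.yml — a minimal build stored alongside the code
steps:
- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*Tests.dll'
```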
My team wanted the ability to populate test data into new data warehouse instances (MySQL on Linux) that are created via Infrastructure as Code (CloudFormation and Chef). They already had the SQL scripts they used for local development, so I would just need to set up a process to package and deploy them. This process would then be automatically triggered when a new instance is created.
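The packaging half of that process can be sketched in a few lines: gather the scripts in a deterministic order and bundle them into a deployable artifact. (A Python sketch; the paths and numeric naming convention are hypothetical, and the real pipeline’s tooling may differ.)

```python
from pathlib import Path
import zipfile

def package_sql_scripts(script_dir, artifact_path):
    """Bundle *.sql seed scripts into a single zip artifact.

    Sorting lexically means a 001_schema.sql / 002_data.sql naming
    convention controls the order they run in on a fresh instance.
    """
    scripts = sorted(Path(script_dir).glob("*.sql"))
    with zipfile.ZipFile(artifact_path, "w") as bundle:
        for script in scripts:
            bundle.write(script, arcname=script.name)
    return [s.name for s in scripts]
```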
This past week I started concentrating on optimizing our release processes. I’ve talked about how our team uses VSTS and Git in a previous post. At the end of a sprint, pull requests are created in all of our repositories to merge to the release branches. As the number of repositories grows, so does the number of manual merge steps at release time. Having worked with the VSTS REST API in the past, I knew this wouldn’t be difficult to automate. A build runs the script below for each of our repositories to create the pull requests. Service hooks to Slack then send notifications so the necessary reviewers can approve the newly created pull requests. The same script can be used post-release to merge back to the master branches. I commented on every line of code in the script below so I didn’t have to go into details here 🙂
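The core of the approach is a single REST call per repository. A rough Python sketch of that call (the account, project, repository, and branch names are placeholders, and token handling is simplified; the original script was PowerShell):

```python
import base64
import json
import urllib.request

def create_pull_request(account, project, repository, source_branch,
                        target_branch, title, pat,
                        opener=urllib.request.urlopen):
    """POST a new pull request via the VSTS Git REST API (api-version 4.1)."""
    url = (f"https://{account}.visualstudio.com/{project}/_apis/git/"
           f"repositories/{repository}/pullrequests?api-version=4.1")
    body = {
        "sourceRefName": f"refs/heads/{source_branch}",
        "targetRefName": f"refs/heads/{target_branch}",
        "title": title,
    }
    # VSTS accepts a personal access token via Basic auth with a blank user.
    token = base64.b64encode(f":{pat}".encode()).decode()
    request = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    return opener(request)
```

Looping this over every repository, with the sprint branch as the source and the release branch as the target, replaces all of the manual merge clicks.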
I previously posted about how I discovered LaunchDarkly and wanted to introduce it at my current employer. See Part 1 here. Our pilot with LaunchDarkly went great. So great that we purchased a subscription.