Add an Application Pipeline to the Serverless Terraform Pipeline

Introduction

Two blogs ago, we created a Serverless Pipeline to deploy Terraform projects. If you haven’t read it, I highly recommend that you go through it before continuing. Here is the link to it https://nivleshc.wordpress.com/2023/03/28/use-aws-codepipeline-aws-codecommit-aws-codebuild-amazon-simple-storage-service-amazon-dynamodb-and-docker-to-create-a-pipeline-to-deploy-terraform-code/.

I generally classify deployments into two categories – infrastructure and application. What is the difference, you may ask?

Infrastructure deployments are used to provision resources that are required to run your applications (or workloads); these could be Amazon EC2 instances, Amazon S3 buckets, or an Amazon Elastic Kubernetes Service cluster (as we did in the previous blog).

An application deployment, as the name suggests, is used to deploy your applications (or workloads). The application will run on resources provisioned using an infrastructure deployment.

Normally, these two deployments are owned by separate teams: the infrastructure (or operations) team and the application (or development) team. However, when we adopt DevOps practices and methodologies, these teams don’t remain siloed in their own worlds; instead, they are involved in the whole deployment cycle (from infrastructure to application deployment).

If you look closely, infrastructure and applications have different lifecycles. Most of the infrastructure is deployed once and then patched every few weeks (or sooner when critical or zero-day vulnerabilities are found) for security reasons (let’s ignore end-of-life migrations for now).

Applications, on the other hand, have a much higher deployment cadence. This is because of the number of new features/fixes that are being rolled out by the developers.

In this scenario, it makes sense to keep the infrastructure and application Terraform state files separate, instead of combining them into one. To reduce errors and add auditability and repeatability, a CI/CD pipeline is a must.
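
To make the separation concrete, below is a minimal, illustrative sketch of what two independent S3 backends could look like, one per project. The bucket, key and table names are placeholders; in the pipelines built in this blog the backend values are injected at build time (via partial backend configuration) rather than hard-coded like this.

# infrastructure project backend (e.g. in the infrastructure repository)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"                  # placeholder
    key            = "pipeline/myproject_infra/terraform.tfstate" # infrastructure state file
    region         = "ap-southeast-2"
    dynamodb_table = "my-terraform-lock-table"                    # placeholder
    encrypt        = true
  }
}

# application project backend (e.g. in the application repository)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"                  # same bucket is fine
    key            = "pipeline/myproject_app/terraform.tfstate"   # separate application state file
    region         = "ap-southeast-2"
    dynamodb_table = "my-terraform-lock-table"
    encrypt        = true
  }
}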

In this blog, we will extend the Serverless Pipeline that we had previously created, by adding an application deployment pipeline to it. This will allow us to deploy infrastructure and applications using their own dedicated pipelines.

Have I got you interested? Let’s dive in.

High Level Architecture

To visualise the changes that we will be making, below is the updated high-level architecture diagram.

The blue rectangle (named Application Pipeline) contains the new resources that will be provisioned. The application pipeline is very similar to the infrastructure pipeline.

To better understand how the application pipeline will be used, let’s go through a typical deployment scenario. The steps are labelled in the architecture diagram above and explained below.

B1 – An application developer (Application Team) pushes changes to the Application AWS CodeCommit repository.

B2 – An Amazon EventBridge rule detects the new commits contained in the push.

B3 – The Amazon EventBridge rule triggers the Application Pipeline (AWS CodePipeline pipeline).

B4 – The AWS CodePipeline pipeline starts with the Source stage, where it retrieves the artifacts from the Application AWS CodeCommit repository, zips them and stores them in the Amazon S3 bucket.

B5 – AWS CodePipeline then provides the zipped artifacts to the Terraform Plan stage. This stage contains an AWS CodeBuild project, which takes the artifacts (Terraform project files) and calculates the proposed changes by running the terraform plan command.

B6 – The pipeline then proceeds to the Change Approval stage.

B7 – In the Change Approval stage, an email is sent to the Application Change Approver, using an Amazon Simple Notification Service (SNS) Topic, requesting him/her to either approve or reject the proposed changes.

B8 – If the Application Change Approver rejects the proposed changes, the pipeline doesn’t deploy anything and exits with a failure.

B9 – If the Application Change Approver approves the proposed changes, the pipeline proceeds to the Terraform Apply stage, where the proposed changes are deployed into the AWS account.

Let’s go through the code to better understand the inner workings of the application pipeline.

Updates to the Serverless Pipeline repository

Before we proceed to the code walk-through, let’s quickly look at the changes that have been pushed to the Serverless Pipeline repository.

  • infrastructure pipeline code has been removed from main.tf and placed into infra_pipeline.tf. This provides easy access to the infrastructure pipeline resource code, keeping main.tf for resources that are common to both the infrastructure and application pipelines.
  • IAM roles and policies have been updated to cater for both infrastructure and application resources.
  • application pipeline code has been added to app_pipeline.tf.
  • the application pipeline will inject the following files into the application repository artifacts during the Terraform Plan stage, thereby taking care of the backend that will be used for the application Terraform state file (Amazon S3 bucket, Amazon DynamoDB table):
    • docker-compose.yml
    • .env.app
    • _backend.tf
  • Note: the application pipeline expects the application Terraform project files at the root of the AWS CodeCommit repository (and not inside a folder called Terraform).
  • logs from the infrastructure and application AWS CodeBuild projects will now be stored in the following Amazon CloudWatch log groups and log streams:
    • infrastructure
      • AWS CodeBuild Project: infra_plan
        • Amazon CloudWatch log group: /{project_name}/{env}/infra/codebuild
        • Amazon CloudWatch log stream: {project_name}_{env}_infra_plan
      • AWS CodeBuild Project: infra_apply
        • Amazon CloudWatch log group: /{project_name}/{env}/infra/codebuild
        • Amazon CloudWatch log stream: {project_name}_{env}_infra_apply
    • application
      • AWS CodeBuild Project: app_plan
        • Amazon CloudWatch log group: /{project_name}/{env}/app/codebuild
        • Amazon CloudWatch log stream: {project_name}_{env}_app_plan
      • AWS CodeBuild Project: app_apply
        • Amazon CloudWatch log group: /{project_name}/{env}/app/codebuild
        • Amazon CloudWatch log stream: {project_name}_{env}_app_apply
  • added/updated admin scripts to manage the application pipeline and application resources.
  • the overall code has been tidied up to remove unnecessary variables.

Walkthrough of the Code

Important: If you had previously cloned the Serverless Terraform pipeline repository, please pull the new changes before continuing. Otherwise, follow the steps below to start fresh.

  1. Clone the Serverless Terraform Pipeline’s GitHub repository using the following command.

git clone https://github.com/nivleshc/blog-create-pipeline-to-deploy-terraform-code.git

2. Open the folder into which the repository has been cloned, go into the sub-folder named terraform and open the file called app_pipeline.tf in your favorite IDE.

3. The first resource block in the file defines the AWS CodeCommit repository that will be created. This will be used to store the application’s Terraform code.

resource "aws_codecommit_repository" "app_repo" {
repository_name = "${var.project}_${var.env}_app"
description = "The AWS CodeCommit repository where the application code will be stored."
default_branch = var.codecommit_app_repo_default_branch_name
}

4. The next resource block provisions the AWS CodeBuild project that will be used to run terraform plan inside the Terraform Plan pipeline stage. This AWS CodeBuild project injects the following environment variables, which can be used inside the application’s Terraform code (for example, to create a terraform_remote_state data source to access the outputs from the infrastructure Terraform state file).

  • TF_VAR_env
  • TF_VAR_project_name
  • TF_VAR_s3_bucket_name
  • TF_VAR_s3_bucket_key_prefix
  • TF_VAR_dynamodb_lock_table_name

For reference, the infrastructure Terraform state file is stored at the following location: {TF_VAR_s3_bucket_name}/{TF_VAR_s3_bucket_key_prefix}/{TF_VAR_project_name}_infra/{TF_VAR_env}/terraform.tfstate
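
A minimal sketch of such a terraform_remote_state data source is shown below. It assumes the application project declares matching Terraform variables (env, project_name, s3_bucket_name, s3_bucket_key_prefix), which the pipeline populates via the TF_VAR_* environment variables listed above, and it references a hypothetical infrastructure output named eks_cluster_name purely for illustration.

# variables populated by the pipeline via the TF_VAR_* environment variables listed above
variable "env" {}
variable "project_name" {}
variable "s3_bucket_name" {}
variable "s3_bucket_key_prefix" {}

# read the outputs from the infrastructure Terraform state file
data "terraform_remote_state" "infra" {
  backend = "s3"

  config = {
    bucket = var.s3_bucket_name
    key    = "${var.s3_bucket_key_prefix}/${var.project_name}_infra/${var.env}/terraform.tfstate"
    region = "ap-southeast-2"
  }
}

# example usage - eks_cluster_name is a hypothetical output name
locals {
  cluster_name = data.terraform_remote_state.infra.outputs.eks_cluster_name
}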

resource "aws_codebuild_project" "app_plan_project" {
name = "${var.project}_${var.env}_app_plan"
description = "AWS CodeBuild Project to display the proposed application changes"
build_timeout = "5"
concurrent_build_limit = 1
service_role = aws_iam_role.codebuild_service_role.arn
artifacts {
type = "NO_ARTIFACTS"
}
environment {
compute_type = var.codecommit_compute_type
image = var.codecommit_container_image_name
type = var.codecommit_container_type
image_pull_credentials_type = "CODEBUILD"
privileged_mode = true
environment_variable {
name = "TF_VAR_env"
type = "PLAINTEXT"
value = var.env
}
environment_variable {
name = "TF_VAR_project_name"
type = "PLAINTEXT"
value = var.project
}
environment_variable {
name = "TF_VAR_s3_bucket_name"
type = "PLAINTEXT"
value = var.s3_bucket_name
}
environment_variable {
name = "TF_VAR_s3_bucket_key_prefix"
type = "PLAINTEXT"
value = var.s3_bucket_key_prefix
}
environment_variable {
name = "TF_VAR_dynamodb_lock_table_name"
type = "PLAINTEXT"
value = var.dynamodb_lock_table_name
}
}
logs_config {
cloudwatch_logs {
group_name = "/${var.project}/${var.env}/app/codebuild"
stream_name = "${var.project}_${var.env}_app_plan"
}
}
source {
type = "CODECOMMIT"
location = aws_codecommit_repository.app_repo.clone_url_http
git_clone_depth = 1
buildspec = <<-EOT
version: 0.2
env:
exported-variables:
– TERRAFORM_PLAN_STATUS
phases:
pre_build:
commands:
# create the docker-compose.yml
– |
cat << EOF > docker-compose.yml
version: "2.1"
services:
terraform_container:
image: hashicorp/terraform:1.4.2
network_mode: bridge
volumes:
– .:/terraform
env_file:
– .env.app
EOF
# create the .env.app file for docker compose
– |
cat << EOF > .env.app
AWS_REGION=ap-southeast-2a
AWS_ACCESS_KEY_ID=\$${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY=\$${AWS_SECRET_ACCESS_KEY}
AWS_SESSION_TOKEN=\$${AWS_SESSION_TOKEN}
EOF
# create terraform backend file
– |
cat << EOF > _backend.tf
terraform {
backend "s3" {
region = "ap-southeast-2"
}
}
EOF
build:
commands:
– echo retrieve container credentials
– credentials=$(curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
– export credentials
– export AWS_ACCESS_KEY_ID=$(echo "$${credentials}" | jq -r '.AccessKeyId')
– export AWS_SECRET_ACCESS_KEY=$(echo "$${credentials}" | jq -r '.SecretAccessKey')
– export AWS_SESSION_TOKEN=$(echo "$${credentials}" | jq -r '.Token')
# run terraform init
– |
docker-compose run –rm terraform_container -chdir=./terraform init \
-backend=true \
-backend-config="bucket=$${TF_VAR_s3_bucket_name}" \
-backend-config="key=terraform.tfstate" \
-backend-config="encrypt=true" \
-backend-config="dynamodb_table=$${TF_VAR_dynamodb_lock_table_name}" \
-backend-config="workspace_key_prefix=$${TF_VAR_s3_bucket_key_prefix}/$${TF_VAR_project_name}_app"
# run terraform plan
– |
docker-compose run –rm terraform_container -chdir=./terraform workspace select $${TF_VAR_env} || \
docker-compose run –rm terraform_container -chdir=./terraform workspace new $${TF_VAR_env} ; \
docker-compose run –rm terraform_container -chdir=./terraform plan -out=$${TF_VAR_project_name}_plan.tfplan -detailed-exitcode ; \
TERRAFORM_PLAN_STATUS=$?
– echo "TERRAFORM_PLAN_STATUS=$${TERRAFORM_PLAN_STATUS}"
post_build:
commands:
# unset all sensitive environment variables
– unset AWS_ACCESS_KEY_ID
– unset AWS_SECRET_ACCESS_KEY
– unset AWS_SESSION_TOKEN
artifacts:
files:
– '**/*'
name: infra_artifacts_$(date +%Y-%m-%d)
EOT
git_submodules_config {
fetch_submodules = true
}
}
}
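
Putting the injected _backend.tf and the -backend-config flags together, the effective backend configuration for the application project is roughly equivalent to the block below (the angle-bracket values are placeholders for the corresponding TF_VAR_* variables). Because Terraform workspaces are used, the application state file ends up at {TF_VAR_s3_bucket_key_prefix}/{TF_VAR_project_name}_app/{TF_VAR_env}/terraform.tfstate inside the Amazon S3 bucket.

# illustrative only - the real values are supplied via -backend-config at terraform init time
terraform {
  backend "s3" {
    region               = "ap-southeast-2"
    bucket               = "<TF_VAR_s3_bucket_name>"
    key                  = "terraform.tfstate"
    encrypt              = true
    dynamodb_table       = "<TF_VAR_dynamodb_lock_table_name>"
    workspace_key_prefix = "<TF_VAR_s3_bucket_key_prefix>/<TF_VAR_project_name>_app"
  }
}

Also note that terraform plan is run with -detailed-exitcode, so TERRAFORM_PLAN_STATUS will be 0 when there are no changes, 2 when there are changes to apply, and 1 on error. This value is exported from the build and surfaced in the approval request email.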

5. As you might have guessed, the next resource block provisions an AWS CodeBuild project that will run terraform apply in the Terraform Apply stage of the pipeline. This will be used to deploy the approved changes into the AWS Account.

resource "aws_codebuild_project" "app_apply_project" {
name = "${var.project}_${var.env}_app_apply"
description = "AWS CodeBuild Project to apply the proposed application changes"
build_timeout = "60"
concurrent_build_limit = 1
service_role = aws_iam_role.codebuild_service_role.arn
artifacts {
type = "NO_ARTIFACTS"
}
environment {
compute_type = var.codecommit_compute_type
image = var.codecommit_container_image_name
type = var.codecommit_container_type
image_pull_credentials_type = "CODEBUILD"
privileged_mode = true
environment_variable {
name = "TF_ENV"
type = "PLAINTEXT"
value = var.env
}
environment_variable {
name = "TF_PROJECT_NAME"
type = "PLAINTEXT"
value = var.project
}
}
logs_config {
cloudwatch_logs {
group_name = "/${var.project}/${var.env}/app/codebuild"
stream_name = "${var.project}_${var.env}_app_apply"
}
}
source {
type = "CODECOMMIT"
location = aws_codecommit_repository.app_repo.clone_url_http
git_clone_depth = 1
buildspec = <<-EOT
version: 0.2
phases:
build:
commands:
– echo retrieve container credentials
– credentials=$(curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
– export credentials
– export AWS_ACCESS_KEY_ID=$(echo "$${credentials}" | jq -r '.AccessKeyId')
– export AWS_SECRET_ACCESS_KEY=$(echo "$${credentials}" | jq -r '.SecretAccessKey')
– export AWS_SESSION_TOKEN=$(echo "$${credentials}" | jq -r '.Token')
# run terraform apply
– |
docker-compose run –rm terraform_container -chdir=./terraform workspace select $${TF_ENV}; \
docker-compose run –rm terraform_container -chdir=./terraform apply $${TF_PROJECT_NAME}_plan.tfplan ; \
TERRAFORM_APPLY_STATUS=$?
– echo "TERRAFORM_APPLY_STATUS=$${TERRAFORM_APPLY_STATUS}"
post_build:
commands:
# unset all sensitive environment variables
– unset AWS_ACCESS_KEY_ID
– unset AWS_SECRET_ACCESS_KEY
– unset AWS_SESSION_TOKEN
# set the stage exitcode to the status of the terraform apply
– exit $TERRAFORM_APPLY_STATUS
EOT
git_submodules_config {
fetch_submodules = true
}
}
}

6. The next two resource blocks define resources that will be used for change approvals. The first resource block is used to create an Amazon Simple Notification Service (SNS) topic, to which application change approval requests will be sent. The second resource block subscribes the application change approver’s email address to the Amazon SNS topic.

resource "aws_sns_topic" "app_pipeline_approval_requests" {
name = "${var.project}_${var.env}_app_pipeline_approval_requests"
}
resource "aws_sns_topic_subscription" "app_approver_subscription" {
topic_arn = aws_sns_topic.app_pipeline_approval_requests.arn
protocol = "email"
endpoint = var.app_approver_email
}

7. Now we get to the engine room. The next resource block defines the application AWS CodePipeline pipeline. It contains four stages, as described below.

  • Source – this stage retrieves the artifacts from the application AWS CodeCommit repository, zips them and stores them in an Amazon S3 bucket.
  • APP_TF_PLAN – this stage uses the zipped artifacts stored in the Amazon S3 bucket and the app_plan AWS CodeBuild project to calculate the proposed changes that will be made to the AWS Account.
  • APP_TF_CHANGE_APPROVAL – this stage sends an email to the application change approver, requesting him/her to either accept or reject the changes. A link to the proposed changes is also included in the email.
  • APP_TF_APPLY – this stage uses the app_apply AWS CodeBuild project to apply the proposed changes to the AWS Account (when the approver approves the changes).
resource "aws_codepipeline" "app_pipeline" {
name = "${var.project}_${var.env}_app_pipeline"
role_arn = aws_iam_role.codepipeline_role.arn
artifact_store {
location = data.aws_s3_bucket.codepipeline_artifacts_s3_bucket.id
type = "S3"
encryption_key {
id = aws_kms_key.codepipeline_kms_key.id
type = "KMS"
}
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["source_output"]
namespace = "SourceVariables"
configuration = {
RepositoryName = aws_codecommit_repository.app_repo.id
BranchName = "main"
PollForSourceChanges = "false"
}
}
}
stage {
name = "APP_TF_PLAN"
action {
name = "APP_TF_PLAN_ACTION"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["source_output"]
output_artifacts = ["app_tf_plan_output"]
namespace = "AppTFPlanVariables"
version = "1"
configuration = {
ProjectName = aws_codebuild_project.app_plan_project.name
}
}
}
stage {
name = "APP_TF_CHANGE_APPROVAL"
action {
name = "ApprovalAction"
category = "Approval"
owner = "AWS"
version = "1"
provider = "Manual"
input_artifacts = []
output_artifacts = []
configuration = {
NotificationArn = aws_sns_topic.app_pipeline_approval_requests.arn
CustomData = "\nApplication Pipeline approval request for CommitId: #{SourceVariables.CommitId} #{SourceVariables.CommitMessage} \nTerraform Plan ExitCode: #{AppTFPlanVariables.TERRAFORM_PLAN_STATUS}"
}
run_order = 1
}
}
stage {
name = "APP_TF_APPLY"
action {
name = "INFRA_TF_APPLY_ACTION"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["app_tf_plan_output"]
output_artifacts = ["app_tf_apply_output"]
namespace = "AppTFApplyVariables"
version = "1"
configuration = {
ProjectName = aws_codebuild_project.app_apply_project.name
}
}
}
}

8. The next two resource blocks create an Amazon EventBridge rule that detects pushes to the application AWS CodeCommit repository, and a target that triggers the application AWS CodePipeline pipeline whenever the rule matches.

resource "aws_cloudwatch_event_rule" "trigger_app_pipeline" {
name = "${var.project}_${var.env}_app_pipeline_trigger"
description = "Trigger the Application Pipeline"
event_pattern = <<PATTERN
{
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit Repository State Change"
],
"resources": [
"${aws_codecommit_repository.app_repo.arn}"
],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"
],
"referenceType": [
"branch"
],
"referenceName": [
"${var.codecommit_app_repo_default_branch_name}"
]
}
}
PATTERN
}
resource "aws_cloudwatch_event_target" "app_pipeline" {
target_id = "${var.project}_${var.env}_app_pipeline_target"
rule = aws_cloudwatch_event_rule.trigger_app_pipeline.id
arn = aws_codepipeline.app_pipeline.arn
role_arn = aws_iam_role.cloudwatch_events_role.arn
}

9. The last resource block outputs the HTTPS clone URL for the application AWS CodeCommit repository. This will be used to clone the application repository to a local folder and to push changes to it.

output "app_repo_https_clone_url" {
description = "The https clone url for the application CodeCommit repository"
value = aws_codecommit_repository.app_repo.clone_url_http
}

Deploying the solution

The instructions listed below are specific to only deploying the application pipeline. They are in addition to those that were provided for deploying the initial Serverless Pipeline. The initial Serverless Pipeline deployment details are available at https://nivleshc.wordpress.com/2023/03/28/use-aws-codepipeline-aws-codecommit-aws-codebuild-amazon-simple-storage-service-amazon-dynamodb-and-docker-to-create-a-pipeline-to-deploy-terraform-code/.

  1. Open Makefile from the root of the folder containing the cloned files from the GitHub repository and update the following additional item

<appapproveremailaddress> – add the Application Change approver’s email address

2. Assuming you have already deployed the prerequisites (as outlined in the Serverless Pipeline blog) and, optionally, the infrastructure pipeline, continue with the following commands to provision the application pipeline.

Note: the infrastructure and application pipeline code is stored in the same folder. Both will be created (or updated) when the following commands are run.

make terraform_init – this initialises the Terraform project

make terraform_plan – this calculates and displays the proposed changes to the Serverless Pipeline

make terraform_apply – once satisfied with the proposed changes shown by “make terraform_plan”, run this to apply the changes to your AWS Account.

A confirmation email will be sent to the application change approver’s email address (and the infrastructure change approver’s email address, if the infrastructure pipeline is also getting created). Ensure they click on the included link to confirm the subscription; otherwise, they will not receive any approval emails.

Once the deployment succeeds, you can use the AWS Management Console to confirm that the following additional resources were created for the application pipeline.

  • AWS CodeCommit repository for application code
  • AWS CodeBuild project for running the APP_TF_PLAN stage
  • AWS CodeBuild project for running the APP_TF_APPLY stage
  • AWS CodePipeline pipeline for application
  • Amazon Simple Notification Service Topic for application change approvals
  • Amazon EventBridge rule to trigger the application AWS CodePipeline pipeline

Cleaning up

Once you have finished with the solution that was deployed using this blog, don’t forget to destroy all the resources. Otherwise, you might get unnecessarily charged.

To destroy the resources, follow the steps outlined below.

  1. Open the Makefile from inside the admin folder. Update the following variables using the same values that were used when deploying the solution.
    • <myenv> – change this to the environment name that was used when deploying the solution.
    • <myprojectname> – change this to the project name that was used when deploying the solution.
    • <mys3bucketname> – change this to the Amazon S3 bucket name that was used when deploying the solution.

2. The cleanup has to be performed in a strict order; otherwise, you might end up with orphaned resources, which would require manual cleanup (this can get quite messy and tedious).

Below is the order we will follow.

  • destroy all application resources (if any)
  • destroy all infrastructure resources (if any)
  • destroy all pipeline resources
  • destroy all prerequisite resources

Now for the actual commands.

  • follow the steps below to delete any resources deployed using the application pipeline
    • make terraform_app_init
    • make terraform_app_show
    • make terraform_app_destroy
  • follow the steps below to delete any resources deployed using the infrastructure pipeline
    • make terraform_infra_init
    • make terraform_infra_show
    • make terraform_infra_destroy
  • follow the steps below to delete the infrastructure and application pipelines, along with all resources that were created for them
    • make terraform_pipeline_init
    • make terraform_pipeline_show
    • make terraform_pipeline_destroy
  • lastly follow the steps below to delete the prerequisite resources that were created
    • make terraform_prereq_init
    • make terraform_prereq_show
    • make terraform_prereq_destroy

That’s it, folks! I hope you found this blog insightful.

In the next blog, we will use the Application pipeline to deploy our Hugo container (that we created three blogs ago) into our Amazon EKS cluster.

Till then, stay safe!