Deploy a Hugo container inside an Amazon EKS cluster using the Serverless Terraform Pipeline

Introduction

Over the last few blogs, we created our own customised Hugo Docker image, built a Serverless Pipeline to deploy Terraform projects, and used the Infrastructure pipeline within it to deploy an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. We then extended the Serverless Terraform pipeline by adding an Application pipeline to it.

In this blog, we will use the Application pipeline (from within the Serverless Terraform pipeline) to deploy the customised Hugo container inside our Amazon EKS cluster. This will be the first workload inside our Amazon EKS cluster! To make the Hugo service accessible, we will use ingress rules to expose it outside the cluster.

Keen to get started? Let's begin.

High Level Architecture

To get a good understanding of what we will be deploying in this blog, let's zoom out and look at the solution as a whole. The diagram below shows the entire solution, including the deployment that will be done as part of this blog (the purple arrow).

As mentioned in the previous blog, the resources inside the blue rectangle are components that make up the application pipeline. We will use it to deploy our customised Hugo container inside the Amazon EKS cluster.

Let's go through the deployment steps (as labelled in the diagram above).

B1 – An application developer (Application Team) pushes changes to the Application AWS CodeCommit repository.

B2 – An Amazon EventBridge rule detects the new commits contained in the push.

B3 – The Amazon EventBridge rule triggers the Application Pipeline (AWS CodePipeline pipeline).

B4 – The AWS CodePipeline pipeline starts with the Source stage, where it retrieves the artifacts from the Application AWS CodeCommit repository, zips them and stores them in the Amazon S3 bucket.

B5 – The AWS CodePipeline pipeline then provides the zipped artifacts to the Terraform Plan stage. This stage contains an AWS CodeBuild project, which takes the artifacts (Terraform project files) and calculates the proposed changes by running a terraform plan on them.

B6 – The pipeline then proceeds to the Change Approval stage.

B7 – In the Change Approval stage, an email is sent to the Application Change Approver, using an Amazon Simple Notification Service (SNS) topic, asking them to either approve or reject the proposed changes.

B8 – If the Application Change Approver rejects the proposed changes, the pipeline doesn’t deploy anything and exits with a failure.

B9 – If the Application Change Approver approves the proposed changes, the pipeline proceeds to the Terraform Apply stage, where the proposed changes are deployed into the AWS account.

B10 – The Terraform Apply stage connects to the Amazon EKS cluster and deploys the following resources:

  • a new Kubernetes namespace for Hugo
  • a Kubernetes deployment inside the Hugo namespace, which will provision the Hugo pods
  • a Kubernetes service that will be used to access the Hugo pods
  • a Kubernetes ingress rule, which will use an AWS Application Load Balancer to expose the Hugo service to the internet

B11 – During deployment, Kubernetes connects to Docker Hub to download the customised Hugo container image (the Hugo Docker image we created four blogs ago has been published to Docker Hub). This will be used to create the Hugo pods.

B12 – After the deployment has completed successfully and the ingress rule has been created, the Hugo service ingress rule hostname will be displayed. A user can use this hostname to connect to the Hugo service over the internet.

Updates to the Serverless Pipeline Repository

Since the last blog, the following updates have been made to the Serverless Pipeline repository.

  • a bug has been fixed where the AWS CodeBuild project TF_INFRA_PLAN was not exposing the following environment variables to the Infrastructure Terraform project that it was processing. These can now be used as Terraform variables, provided they are declared in the project's variables.tf file.
    • env
    • project_name
    • s3_bucket_name
    • s3_bucket_key_prefix
    • dynamodb_lock_table_name
  • the Terraform variable project has been renamed to project_name. This makes it consistent with the name used in other parts of the project (for instance, in the Makefile).
  • the admin Terraform code has been updated. It can now be used to manage the infrastructure and application resources deployed inside an Amazon Elastic Kubernetes Service cluster.

Updates to the Infrastructure Code for deploying the Amazon EKS Cluster

The code for deploying the infrastructure using the Infrastructure pipeline has also been given an uplift. The code is located in the GitHub repository at https://github.com/nivleshc/blog-tf-pipeline-eks-cluster and was initially discussed in this blog https://nivleshc.wordpress.com/2023/06/12/create-an-amazon-elastic-kubernetes-service-cluster-using-a-serverless-terraform-pipeline/.

Below is a summary of the changes that have been made.

  • the AWS Load Balancer Controller has been added to the Amazon EKS cluster, to provision load balancers for ingress resources.
  • the subnets have been tagged with “kubernetes.io/role/elb” or “kubernetes.io/role/internal-elb” to denote whether they are internet-facing or internal (a sketch of these changes follows this list).
  • an OpenID Connect (OIDC) provider has been added to the Amazon EKS cluster. This allows AWS IAM roles to be used for Kubernetes service accounts.
  • the following Amazon EKS control plane logs have been enabled. These will be written to Amazon CloudWatch Logs.
    • API Server
    • Audit
    • Authenticator
    • Controller manager
    • Scheduler
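To make these updates a little more concrete, below is a minimal Terraform sketch of what they roughly look like. This is not the exact code from the repository; the resource names (aws_subnet.public, aws_subnet.private, aws_eks_cluster.this) are placeholders for illustration only.

# tag the subnets so the AWS Load Balancer Controller knows where to place load balancers
resource "aws_subnet" "public" {
  # ... vpc_id, cidr_block, etc.
  tags = {
    "kubernetes.io/role/elb" = "1" # internet-facing load balancers
  }
}

resource "aws_subnet" "private" {
  # ... vpc_id, cidr_block, etc.
  tags = {
    "kubernetes.io/role/internal-elb" = "1" # internal load balancers
  }
}

# enable the Amazon EKS control plane logs (written to Amazon CloudWatch Logs)
resource "aws_eks_cluster" "this" {
  # ... name, role_arn, vpc_config, etc.
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}

# create an OpenID Connect provider for the cluster, so that Kubernetes service
# accounts can assume AWS IAM roles (IAM Roles for Service Accounts)
data "tls_certificate" "eks" {
  url = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.this.identity[0].oidc[0].issuer
}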

Walkthrough of the Code

Let's go through the code to understand how the deployment will be carried out.

  1. Clone the GitHub repository for this blog, using the following command.
git clone https://github.com/nivleshc/blog-tf-pipeline-deploy-hugo-container.git

2. Open the cloned folder, and then open the file called main.tf in your favourite IDE. This file contains the resources that will be created inside the Amazon EKS cluster. Let's go through each of the resource blocks.

3. The first resource block is used to create a Kubernetes namespace for Hugo. A Kubernetes namespace is used to divide your cluster into virtual sub-clusters. We will use this namespace for deploying our Hugo resources.

resource "kubernetes_namespace" "hugo" {
metadata {
name = "${var.env}-${local.hugo.name}"
}
}

4. The next resource block defines the Kubernetes deployment that will be used to create the Hugo pods. It contains the number of replicas that will be created, along with the container image that will be used to create the pods. The values for each of the configuration items below are defined in locals.tf, which we will go through a bit later on.

resource "kubernetes_deployment" "hugo" {
metadata {
name = "${var.env}-${local.hugo.name}-deployment"
labels = {
app = "${var.env}-${local.hugo.name}"
}
namespace = kubernetes_namespace.hugo.id
}
spec {
replicas = local.hugo.replica_count
selector {
match_labels = {
app = "${var.env}-${local.hugo.name}"
}
}
template {
metadata {
labels = {
app = "${var.env}-${local.hugo.name}"
}
}
spec {
container {
image = local.hugo.image_name
name = local.hugo.name
}
}
}
}
}

5. Kubernetes is an orchestrator. It ensures that the specified number of pods is available at all times. As you can imagine, pods might encounter an error that makes them unhealthy. In such scenarios, Kubernetes terminates the unhealthy pods and replaces them with new ones. This poses a challenge if you were to try connecting to the same pod after it had been terminated. To provide a consistent endpoint for connecting to the pods, Kubernetes offers a service object. In its simplest form, a service is like a load balancer: it has a static URL and passes connections through to the backend instances. In our case, it will provide a static endpoint, which we will use to access the Hugo pods. The next resource block defines the Hugo service.

resource "kubernetes_service" "hugo" {
metadata {
name = "${var.env}-${local.hugo.name}-service"
namespace = kubernetes_namespace.hugo.id
}
spec {
selector = {
app = kubernetes_deployment.hugo.spec[0].template[0].metadata[0].labels.app
}
session_affinity = local.hugo.service.session_affinity
port {
port = local.hugo.service.port
target_port = local.hugo.service.target_port
}
type = local.hugo.service.type
}
}

6. The last resource block in main.tf creates a Kubernetes ingress rule. Behind the scenes, the AWS Load Balancer Controller will provision an AWS Application Load Balancer to expose the Kubernetes Hugo service to the internet, thereby making it accessible from outside the cluster.

resource "kubernetes_ingress_v1" "hugo" {
metadata {
name = local.hugo.name
namespace = kubernetes_namespace.hugo.id
annotations = {
"alb.ingress.kubernetes.io/scheme" = local.hugo.ingress.annotations.scheme
"alb.ingress.kubernetes.io/target-type" = local.hugo.ingress.annotations.target_type
}
labels = {
"app.kubernetes.io/name" = local.hugo.name
}
}
spec {
ingress_class_name = local.hugo.ingress.class_name
rule {
http {
path {
path = local.hugo.ingress.rule.http.path.path
backend {
service {
name = kubernetes_service.hugo.metadata[0].name
port {
number = local.hugo.service.port
}
}
}
path_type = local.hugo.ingress.rule.http.path.path_type
}
}
}
}
depends_on = [kubernetes_service.hugo]
}

7. Next, open the file called locals.tf. This file defines the values for all the local variables that we saw in main.tf. It contains definitions for the name of the Hugo pods, the image to be used, the port that the Hugo service will listen on for incoming connections, and the port it will use to connect to the Hugo pods. It also defines the ingress configuration items.

locals {
  hugo = {
    name          = "hugo"
    image_name    = "nivleshc/hugo:nivleshcwordpress_0.1"
    replica_count = 2
    service = {
      port             = 80
      target_port      = 80
      session_affinity = "ClientIP"
      type             = "NodePort"
    }
    ingress = {
      annotations = {
        scheme      = "internet-facing"
        target_type = "instance"
      }
      class_name = "alb"
      rule = {
        http = {
          path = {
            path      = "/"
            path_type = "Prefix"
          }
        }
      }
    }
  }
}

8. Next, open the file called variables.tf. This contains declarations for all the non-local variables that will be used in this Terraform project. You don’t need to configure any of these, since their values are automatically injected into your Terraform project via the Application Pipeline’s AWS CodeBuild project.

variable "env" {
description = "The environment name for this deployment"
}
variable "project_name" {
description = "The name of the project"
}
variable "region" {
description = "The AWS region where all resources will be deployed"
default = "ap-southeast-2"
}
variable "s3_bucket_name" {
description = "The name of the Amazon S3 bucket where the terraform state files are stored."
}
variable "s3_bucket_key_prefix" {
description = "The folder name inside which the terraform state files are stored."
}
variable "s3_bucket_key" {
description = "The name of the terraform state file"
default = "terraform.tfstate"
}
variable "dynamodb_lock_table_name" {
description = "The name of the DynamoDB table name that is used to manage concurrent access to terraform state files."
}

9. You might be wondering, where would I use the above variables? Well, in order to deploy inside the Amazon EKS cluster, you need to get a reference to it. This is tricky because the cluster is provisioned as part of the Infrastructure pipeline. The good news is that its state file is stored in an Amazon S3 bucket, and we can use these variable values to access it and retrieve the Amazon EKS cluster details. This is a nice segue into the next file. Open _provider.tf.

The first block in this file defines the versions of the Terraform providers that can be used with this Terraform project. The provider "aws" block that follows configures the AWS provider, including the default tags that will be added to all the resources. Notice the “<myenv>” and “<myname>” placeholders. We will update these during the deployment section.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~>2.12"
    }
  }
}

provider "aws" {
  region = "ap-southeast-2"
  default_tags {
    tags = {
      Environment = "<myenv>"
      Owner       = "<myname>"
    }
  }
}

The next block is a terraform_remote_state data source, which is used to access the infrastructure state file. Pay attention to the variables that are used in this block; they are declared in the variables.tf file that we saw in the last step.

data "terraform_remote_state" "infra" {
backend = "s3"
config = {
bucket = var.s3_bucket_name
key = var.s3_bucket_key
region = var.region
encrypt = true
dynamodb_table = var.dynamodb_lock_table_name
workspace_key_prefix = "${var.s3_bucket_key_prefix}/${var.project_name}_infra"
}
workspace = var.env
}

The last two blocks are used to get a reference to the Amazon EKS cluster: an aws_eks_cluster_auth data source retrieves an authentication token for the cluster, and the Terraform Kubernetes provider uses this token, together with the cluster endpoint and certificate authority read from the remote state, to connect to it. This provider will be used to provision the Hugo resources inside the Amazon EKS cluster.

data "aws_eks_cluster_auth" "eks_cluster" {
name = data.terraform_remote_state.infra.outputs.eks_cluster_name
}
provider "kubernetes" {
host = data.terraform_remote_state.infra.outputs.eks_endpoint
cluster_ca_certificate = base64decode((data.terraform_remote_state.infra.outputs.eks_certificate_authority))
token = data.aws_eks_cluster_auth.eks_cluster.token
}

10. Let's open the last file, outputs.tf, and have a look inside. This file contains an output block, which will display the HTTP ingress hostname for the Hugo service. This hostname can be used to access the Hugo service from outside the Amazon EKS cluster, using the ingress rule. It is as easy as pasting the hostname into your browser and voila! You will be able to access the Hugo service.

output "hugo_service_ingress_http_hostname" {
description = "Hugo service ingress hostname"
value = kubernetes_ingress_v1.hugo.status[0].load_balancer[0].ingress[0].hostname
}

Deploying the solution

Before continuing, ensure that you have the Serverless Terraform Pipeline, Application Pipeline and the infrastructure (Amazon EKS cluster) deployed.

If any of the above are missing, refer to my previous blogs to deploy them.

With the prerequisites in place, follow the steps below.

  1. Clone the AWS CodeCommit application repository that was created as part of the Serverless Terraform pipeline using the following command
git clone <application repository https clone url>

2. Copy all the files from the blog-tf-pipeline-deploy-hugo-container folder (this contains the files from this blog’s GitHub repository) into the folder where you cloned the application AWS CodeCommit repository (step 1 above). Note – copy only the contents of the folder, not the folder itself.

3. From the folder containing the application repository files, open the file called _provider.tf.

4. Within the provider “aws” block, locate the default_tags section and update the values for Environment and Owner tags. Their values must match those that were used when the Serverless Terraform Pipeline was created.

5. Stage all the files using Git. Create a commit and push the changes to the application CodeCommit repository.
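For example, from within the application repository folder (the commit message below is just an illustration):

git add .
git commit -m "deploy hugo to the Amazon EKS cluster"
git push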

6. Within a few minutes of pushing the commit, the Amazon EventBridge rule will detect it and trigger the Application Serverless Terraform pipeline (Application AWS CodePipeline pipeline).

7. The Application AWS CodePipeline pipeline will retrieve the artifacts from the Application AWS CodeCommit repository, store them in the Amazon S3 bucket and then proceed to the next stage, where it will generate a Terraform plan for the changes.

8. It will then send an email to the Application Change Approver, asking them to either approve or reject the changes.

9. Once the changes have been approved, the Application AWS CodePipeline pipeline will move to the Terraform Apply stage, where it will deploy the Hugo resources inside the Kubernetes cluster.

10. After the deployment has successfully finished, you can use the value of the output for hugo_service_ingress_http_hostname, as shown in the AWS CodeBuild project output logs, to access the Hugo container via your browser.
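If you prefer the command line, you can also check it with curl, replacing the placeholder below with the actual hostname value from the output (the load balancer can take a few minutes to become active):

curl http://<hugo_service_ingress_http_hostname>/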

Cleaning up

After you have finished with the solution described in this blog, it is important to destroy all the provisioned resources. Otherwise you will be unnecessarily charged.

Cleanup is extremely easy, thanks to the code inside the admin folder.

Let's first update the Makefile, to ensure that it can access the Terraform project’s state files.

  1. Open the Makefile located inside the admin folder. Update the following variables using the same values that were used when deploying the solution.
    • <myenv> – change this to the environment name that was used when deploying the solution.
    • <myprojectname> – change this to the project name that was used when deploying the solution.
    • <mys3bucketname> – change this to the Amazon S3 bucket name that was used when deploying the solution.
  2. Next, we need to destroy the resources in the order listed below. Otherwise, you will be left with orphaned resources, which will require a manual cleanup via either the AWS Management Console or the AWS CLI. It will be messy, so I highly recommend that you follow this order. The following commands must be run from the command line, from within the admin folder.
    • first, destroy all the resources provisioned via the application pipeline by using the steps listed below.
      • make terraform_app_init
      • make terraform_app_show
      • make terraform_app_destroy
    • next destroy all the resources provisioned via the infrastructure pipeline by using the steps listed below.
      • make terraform_infra_init
      • make terraform_infra_show
      • make terraform_infra_destroy
    • now, we need to destroy all the Serverless Terraform pipeline resources. This is done by using the steps listed below.
      • make terraform_pipeline_init
      • make terraform_pipeline_show
      • make terraform_pipeline_destroy
    • lastly, destroy all the prerequisite resources using the following steps.
      • make terraform_prereq_init
      • make terraform_prereq_show
      • make terraform_prereq_destroy

This brings us to the end of this blog. I hope you enjoyed following through as we deployed Hugo into our Amazon EKS cluster.

Isn’t Kubernetes awesome? With a few commands, you can have an application up and running, and be confident that the service will remain available, even if any of the pods become unhealthy or are terminated.

Till the next time, stay safe!