A scenario-based tutorial for Azure Kubernetes Service – Part 2

Introduction

In this blog, we will dig a little deeper into Azure Kubernetes Service (AKS). What better way to do this than by building an AKS cluster ourselves! Just a heads-up, I will be using terminology that was introduced in part 1 of this mini-blog series. If you haven’t read it, or need a refresher, you can access it at https://nivleshc.wordpress.com/2019/03/04/a-scenario-based-tutorial-for-azure-kubernetes-service-part-1

Let’s start by describing the AKS cluster architecture. The diagram below provides a great overview.

(Image copied from https://docs.microsoft.com/en-au/azure/aks/media/concepts-clusters-workloads/cluster-master-and-nodes.png)

The AKS Cluster is made up of two components. These are described below

  • cluster master node is an Azure-managed service that runs the core Kubernetes services and ensures the application workloads are running properly.
  • node is where the application workloads run.

The cluster master node is comprised of the following components

  • kube-apiserver – this API server provides a way to interface with the underlying Kubernetes API. Management tools such as kubectl or the Kubernetes dashboard interact with it to manage the Kubernetes cluster.
  • etcd – this provides a key-value store within Kubernetes, and is used to maintain the state and configuration of the Kubernetes cluster.
  • kube-scheduler – this component decides which nodes newly created or scaled-up application workloads can run on, and then starts these workloads on them.
  • kube-controller-manager – the controller manager looks after several smaller controllers that perform actions such as replicating pods and handling node operations.

The node is comprised of the following

  • kubelet – this is an agent that handles the orchestration requests from the cluster master node and schedules the running of the requested containers.
  • kube-proxy – this component provides networking services on each node. It takes care of routing network traffic and managing IP addresses for services and pods
  • container runtime – this allows the container application workloads to run and interact with other resources within the node.

For more information about the above, please refer to https://docs.microsoft.com/en-au/azure/aks/concepts-clusters-workloads

Now that you have a good understanding of the Kubernetes architecture, let’s move on to the preparation stage, after which we will deploy our AKS cluster.

Preparation

AKS subnet size

AKS uses a subnet to host nodes, pods, and any other Kubernetes and Azure resources that are created for the AKS cluster. As such, it is extremely important that the subnet is appropriately sized, to ensure it can accommodate the resources that will be initially created, and still have enough room for any future updates.

There are two networking methods available when deploying an Azure Kubernetes Service cluster

  • Kubenet
  • Azure Container Networking Interface (CNI)

AKS uses kubenet by default and, in doing so, automatically creates the virtual network and subnets required to host the pods. This is a great solution if you are learning about AKS; however, if you need more control, it is better to go with Azure CNI. With Azure CNI, you get the option to use an existing virtual network and subnet, or to create a custom one. This makes it a much better option, especially when deploying into a production environment.

In this blog, we will use Azure CNI.

The formula below provides a good estimate of how large your subnet must be in order to accommodate your AKS resources.

Subnet size = (number of nodes + 1) + ((number of nodes + 1) * maximum number of pods per node that you configure)

When using Azure CNI, by default each node is set up to run 30 pods. If you need to change this limit, you will have to deploy your AKS cluster using the Azure CLI or Azure Resource Manager templates.
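As a rough sketch of what this looks like with the Azure CLI (the resource group and cluster names are the ones used later in this blog, and the remaining parameters are purely illustrative), the pod limit is set at creation time with the --max-pods flag:

az aks create --resource-group myAKS-resourcegroup --name mydemoAKS01 --node-count 4 --network-plugin azure --max-pods 50 --generate-ssh-keys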

Just as an example, for a default AKS cluster deployment, using Azure CNI with 4 nodes, the subnet size at a minimum must be

IPs required = (4 + 1) + ((4 + 1) * (30 pods per node)) = 5 + (5 * 30) = 155

This means that the subnet must be at least a /24.

For this blog, create a new resource group called myAKS-resourcegroup. Within this new resource group, create a virtual network called AKSVNet with an address space of 10.1.0.0/16. Inside this virtual network, create a subnet called AKSSubnet1 with an address range of 10.1.3.0/24.
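If you prefer to script this preparation, a sketch of the equivalent Azure CLI commands (using the names above) might look like the following — double-check the flag names against your CLI version:

az group create --name myAKS-resourcegroup --location australiaeast

az network vnet create \
  --resource-group myAKS-resourcegroup \
  --name AKSVNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name AKSSubnet1 \
  --subnet-prefixes 10.1.3.0/24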

Deploying an Azure Kubernetes Service Cluster

Let’s proceed on to deploying our AKS cluster.

  1. Log in to your Azure Portal and add a Kubernetes Service
  2. Once you click on Create, you will be presented with a screen to enter your cluster’s configuration information
  3. Under Basics
  • Choose the subscription into which you want to deploy the AKS cluster
  • Choose the resource group into which you want to deploy the AKS cluster. One thing to point out here is that the cluster master node will be deployed in this resource group; however, a new resource group with a name matching the format MC_<AKS master node resource group name>_<AKS cluster name>_<region> will be created to host the nodes where the containers will run (if you use the values specified in this blog, your node resource group will be named MC_myAKS-resourcegroup_mydemoAKS01_australiaeast)
  • Provide the Kubernetes cluster name (for this blog, let’s call this mydemoAKS01)
  • Choose the region you want to deploy the AKS cluster in (for this blog, we are deploying in australiaeast region)
  • Choose the Kubernetes version you want to deploy (you can choose the latest version, unless there is a reason to choose a specific version)
  • DNS name prefix – for simplicity, you can set this to the same as the cluster name
  • Choose the Node size (for this blog, let’s choose D2s v3 (2 vCPUs, 8 GB memory))
  • Set the Node count to 1 (the Node count specifies the number of nodes that will be initially created for the AKS cluster)
  • Leave virtual nodes set to disabled

Under Authentication

  • Leave the default option to create a service principal (you can also provide an existing service principal, however for this blog, we will let the provisioning process create a new one for us)
  • RBAC allows you to control who can view the Kubernetes configuration (kubeconfig) information and to limit the permissions that they have. For now, leave RBAC turned off

Under Networking

  • Leave HTTP application routing set to No
  • As previously mentioned, by default AKS uses kubenet for networking. However, we will use Azure CNI. Change the Network configuration from Basic to Advanced
  • Choose the virtual network and subnet that was created as per the prerequisites (AKSVNet and AKSSubnet1)
  • Kubernetes uses a separate address range to allocate IP addresses to internal services within the cluster. This is referred to as the Kubernetes service address range. This range must NOT be within the virtual network range and must not be used anywhere else. For our purposes, we will use the range 10.2.4.0/24. Technically, it is possible to use IP addresses for the Kubernetes service address range from within the cluster virtual network; however, this is not recommended due to potential IP address overlaps, which could cause unpredictable behaviour. To read more about this, refer to https://docs.microsoft.com/en-au/azure/aks/configure-azure-cni.
  • Leave the Kubernetes DNS service IP address as the default 10.2.4.10 (the default is set to the tenth IP address within the Kubernetes service address range)
  • Leave the Docker bridge address as the default 172.17.0.1/16. The Docker Bridge lets AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn’t overlap with other address ranges in use on your network

Under Monitoring

  • Leave enable container monitoring set to Yes
  • Provide an existing Log Analytics workspace or create a new one

Under Tags

  • Create any tags that need to be attached to this AKS cluster
  4. Click on Next: Review + create to get the settings validated. After validation has passed, click on Create. Be aware that it can take anywhere from 10 to 15 minutes for the AKS cluster provisioning to complete.
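For reference, the same cluster could also be deployed from the Azure CLI. The sketch below assumes the names and network settings used in this blog; treat it as illustrative and verify the flags against your CLI version before running it:

SUBNET_ID=$(az network vnet subnet show --resource-group myAKS-resourcegroup --vnet-name AKSVNet --name AKSSubnet1 --query id --output tsv)

az aks create \
  --resource-group myAKS-resourcegroup \
  --name mydemoAKS01 \
  --location australiaeast \
  --dns-name-prefix mydemoAKS01 \
  --node-count 1 \
  --node-vm-size Standard_D2s_v3 \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --service-cidr 10.2.4.0/24 \
  --dns-service-ip 10.2.4.10 \
  --docker-bridge-address 172.17.0.1/16 \
  --enable-addons monitoring \
  --generate-ssh-keys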

While you are waiting

During the AKS cluster provisioning process, there are a number of things that are happening under the hood. I managed to track down some of them and have listed them below.

  • Within the resource group that you specified for the AKS cluster to be deployed in, you will now see a new AKS cluster with the name mydemoAKS01
  • If you open the virtual network that the AKS cluster has been configured to use and click on Connected devices, you will notice that a lot of IP addresses have already been allocated.

    I have noticed that the number of IP addresses equals

    ((number of pods per node) + 1) * number of nodes

   FYI – for the AKS cluster that is being deployed in this blog, it is 31

  • A new resource group with a name complying with the naming format MC_<AKS master node resource group name>_<AKS cluster name>_<region> will be created. In our case, it will be called MC_myAKS-resourcegroup_mydemoAKS01_australiaeast. This resource group contains the virtual machine for the node (not the cluster master node), along with all the resources that the virtual machine needs (availability set, disk, network card, network security group)

What will this cost me?

The cluster master node is a managed service and you are not charged for it. You only pay for the nodes on which the application workloads run (these are the resources inside the new resource group that is automatically created when you provision the AKS cluster).

In the next blog, we will delve deeper into the newly deployed AKS cluster, exposing its configuration using command line tools.

Happy sailing and till the next time, enjoy!

Using Ansible to deploy an AWS environment

Background

Over the past few weeks, I have been looking at various automation tools for AWS. One tool that seems to get a lot of limelight is Ansible, an open-source automation tool from Red Hat. I decided to give it a go, and I was surprised at how easy it is to learn Ansible and how powerful it can be.

All one has to do is write up a list of tasks using YAML notation in a file (called a playbook) and get Ansible to execute it. Ansible reads the playbook and executes the tasks in the order they are written. Here is the biggest advantage: there are no agents to install on the managed computers! Ansible connects to each of the managed computers using SSH or WinRM.

Another nice feature of Ansible is that it supports third party modules. This allows Ansible to be extended to support many of the services that it natively does not understand.

In this blog, we will be focusing on one of the third-party modules, the AWS module. Using this, we will use Ansible to deploy an environment within AWS.

Scenario

For this blog, we will use Ansible to provision an AWS Virtual Private Cloud (VPC) in the North Virginia (us-east-1) region. Within this VPC, we will create a public and a private subnet. We will then deploy a jumphost in the public subnet and a server within the private subnet.

Below is a diagram depicting what will be done.

Figure 1: Environment that will be deployed within AWS using Ansible Playbook

Preparation

The computer that is used to run Ansible to manage all other computers is referred to as the control machine. Currently, Ansible can be run from any machine with Python 2 (version 2.7) or Python 3 (version 3.5 or higher) installed. The Ansible control machine can run the following operating systems

  • Red Hat
  • Debian
  • CentOS
  • macOS
  • any of the BSD variants

Note: Currently, the Windows operating system is not supported for running the control machine.

For this blog, I am using a MacBook to act as the control machine.

Before we run Ansible, we need to get a few things done. Let’s go through them now.

  1. We will use pip (Python package manager) to install Ansible. If you do not already have pip installed, run the following command to install it
    sudo easy_install pip
  2. With pip installed, use the following command to install Ansible
    sudo pip install ansible

    For those that are not using macOS for their control machine, you can get the relevant installation commands from https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html.

  3. Next, we must install the AWS Command Line Interface (CLI) tools. Use the following command for this.
    sudo pip install awscli

    More information about the AWS CLI tools is available at https://aws.amazon.com/cli/

  4. To provision items within AWS, we need to provide Ansible with a user account that has the necessary permissions. Using the AWS console, create a user account ensuring it is assigned an access key and a secret access key. At a minimum, this account must have the following policies assigned to it.
    AmazonEC2FullAccess
    AmazonVPCFullAccess

    Note: As this is a privileged user account, please ensure that the access key and secret access key are kept in a safe place.

  5. To provision AWS Elastic Compute Cloud (EC2) instances, we require key pairs created in the region that the EC2 instances will be deployed in. Ensure that you already have key pairs for the North Virginia (us-east-1) region. If not, please create them.

Instructions

Create an Ansible Playbook

Use the following steps to create an Ansible playbook to provision an AWS environment.

Open your favourite YAML editor and paste the following code
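A sketch of this playbook header is shown below; the key pair name is a placeholder, so replace it with the name of your us-east-1 key pair:

---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    vpc_region: us-east-1          # AWS region where the environment will be provisioned
    my_useast1_key: my-keypair     # placeholder - replace with your us-east-1 key pair name

  tasks: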

The above code instructs Ansible that it should connect to the local computer, to run all the defined tasks. This means that Ansible modules will use the local computer to connect to AWS APIs in order to carry out the tasks.

Another thing to note is that we are declaring two variables. These will be used later in the playbook.

  • vpc_region – this is the AWS region where the AWS environment will be provisioned (currently set to us-east-1)
  • my_useast1_key – provide the name of your key pair for the us-east-1 region that will be used to provision EC2 instances

Next, we will define the tasks that Ansible must carry out. The format of the tasks is as follows

  • name – this gives a descriptive name for the task
  • module name – this is the module that Ansible will use to carry out the task
  • module Parameters – these are parameters passed to the module, to carry out the specific task
  • register – this is an optional keyword and is used to record the output that is returned from the module, after the task has been carried out.

Copy the following lines of code into your YAML file.
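A sketch of these two tasks (added under the tasks: section of the playbook) could look like this:

    - name: create a VPC called ansibleVPC
      ec2_vpc_net:
        name: ansibleVPC
        cidr_block: 172.32.0.0/16
        region: "{{ vpc_region }}"
        state: present
      register: ansibleVPC

    - name: display the details of the new VPC
      debug:
        var: ansibleVPC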

The above code contains two tasks.

  • the first task creates an AWS Virtual Private Cloud (VPC) using the ec2_vpc_net module. The output of this module is recorded in the variable ansibleVPC using the register command
  • the second task outputs the contents of the variable ansibleVPC using the debug command (this displays the output of the previous task)

Side Note

  • Name of the VPC has been set to ansibleVPC
  • The CIDR block for the VPC has been set to 172.32.0.0/16
  • The state keyword controls what must be done to the VPC. In our case, we want it created and to exist, as such, the value for state has been set to present.
  • The region is being set by referencing the variable that was defined earlier. Variables are referenced with the notation “{{ variable name }}”

Copy the following code to create an AWS internet gateway and associate it with the newly created VPC. The second task in the below code displays the result of the internet gateway creation.
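One way to do this is with the ec2_vpc_igw module; the sketch below names the internet gateway ansibleVPC_igw via a tag and registers the result so it can be displayed:

    - name: create an internet gateway for ansibleVPC
      ec2_vpc_igw:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: ansibleVPC_igw
      register: ansibleVPC_igw

    - name: display the details of the internet gateway
      debug:
        var: ansibleVPC_igw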

The next step is to create the public and private subnets. However, instead of hardcoding the availability zones into which these subnets will be deployed, we will pick the first availability zone in the region for our public and the second availability zone in the region for our private subnet. Copy the following code into your YAML file to show all the availability zones that are present in the region, and which ones will be used for the public and private subnets.
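A sketch of this step, assuming the aws_az_facts module (renamed aws_az_info in later Ansible releases) and its availability_zones return list, is below:

    - name: obtain all the availability zones in the region
      aws_az_facts:
        region: "{{ vpc_region }}"
      register: az_in_region

    - name: show which availability zones will be used for the public and private subnets
      debug:
        msg:
          - "public subnet availability zone  - {{ az_in_region.availability_zones[0].zone_name }}"
          - "private subnet availability zone - {{ az_in_region.availability_zones[1].zone_name }}"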

Copy the following code to create the public subnet in the first availability zone in us-east-1 region. Do note that we are provisioning our public subnet with CIDR range 172.32.1.0/24
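A sketch using the ec2_vpc_subnet module could look like this (map_public is assumed here so that instances in this subnet receive public IP addresses):

    - name: create the public subnet in the first availability zone
      ec2_vpc_subnet:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        cidr: 172.32.1.0/24
        az: "{{ az_in_region.availability_zones[0].zone_name }}"
        region: "{{ vpc_region }}"
        state: present
        map_public: yes
        tags:
          Name: public_subnet
      register: public_subnet

    - name: display the details of the public subnet
      debug:
        var: public_subnet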

Copy the following code to deploy the private subnet in the second availability zone in us-east-1 region. It will use the CIDR range 172.32.2.0/24
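The private subnet task is almost identical; a sketch is below:

    - name: create the private subnet in the second availability zone
      ec2_vpc_subnet:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        cidr: 172.32.2.0/24
        az: "{{ az_in_region.availability_zones[1].zone_name }}"
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: private_subnet
      register: private_subnet

    - name: display the details of the private subnet
      debug:
        var: private_subnet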

Hold on! To make a public subnet, it is not enough to just create a subnet. We need to create routes from that subnet to the internet gateway! The below code will address this. The private subnet does not need any such routes, it will use the default route table.
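A sketch of this task, assuming the ec2_vpc_route_table module and the route table name shown later in this blog, is below:

    - name: create a route table for the public subnet, with a route to the internet gateway
      ec2_vpc_route_table:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: rt_ansibleVPC_PublicSubnet
        subnets:
          - "{{ public_subnet.subnet.id }}"
        routes:
          - dest: 0.0.0.0/0
            gateway_id: "{{ ansibleVPC_igw.gateway_id }}"
      register: rt_ansibleVPC_PublicSubnet

    - name: display the details of the public subnet route table
      debug:
        var: rt_ansibleVPC_PublicSubnet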

As planned, we will be deploying jumphosts within the public subnet. By default, you won’t be able to externally connect to the EC2 instances deployed within the public subnet because the default security group does not allow this.

To remediate this, we will create a new security group that will allow RDP access and assign it to the jumphost server. For simplicity, the security group will allow RDP access from anywhere, however please ensure that for your environment, you have locked it down to a few external IP addresses.
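A sketch of such a security group, assuming the ec2_group module and allowing RDP (TCP 3389) from anywhere, is below:

    - name: create a security group for the jumphost
      ec2_group:
        name: sg_ansibleVPC_publicsubnet_jumphost
        description: security group for the jumphost in the public subnet
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        state: present
        rules:
          - proto: tcp
            from_port: 3389
            to_port: 3389
            cidr_ip: 0.0.0.0/0   # lock this down to specific external IP addresses in your environment
      register: sg_ansibleVPC_publicsubnet_jumphost

    - name: display the details of the jumphost security group
      debug:
        var: sg_ansibleVPC_publicsubnet_jumphost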

Phew! Finally, we are ready to deploy our jumphost! Copy the following code for this
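A sketch of the jumphost task, assuming the ec2 module, is below. The AMI ID is a placeholder, so look up the current Windows Server 2016 base image in your region before running the playbook:

    - name: deploy the jumphost in the public subnet
      ec2:
        key_name: "{{ my_useast1_key }}"
        instance_type: t2.micro
        image: ami-xxxxxxxxxxxxxxxxx   # placeholder - replace with the current Windows Server 2016 base AMI ID
        group_id: "{{ sg_ansibleVPC_publicsubnet_jumphost.group_id }}"
        vpc_subnet_id: "{{ public_subnet.subnet.id }}"
        assign_public_ip: yes
        region: "{{ vpc_region }}"
        wait: yes
        instance_tags:
          Name: win2016jh
        exact_count: 1
        count_tag:
          Name: win2016jh
      register: win2016jh

    - name: display the details of the jumphost
      debug:
        var: win2016jh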

I would like to point out a few things

  • The jumphost is running on a t2.micro instance. This instance type is usually sufficient for a jumphost in a lab environment, however if you need more performance, this can be changed (changing the instance type from t2.micro can take you over the AWS free tier limits and subsequently add to your monthly costs)
  • The image parameter refers to the AMI ID of the Windows 2016 base image that is currently available within the AWS console. AWS, from time to time, changes the images that are available. Please check within the AWS console to ensure that the AMI ID is valid before running the playbook
  • Instance tags are tags that are attached to the instance. In this case, the instance tags have been used to name the jumphost win2016jh.

Important Information

The following parameters are extremely important if you do not intend to deploy a new EC2 instance for the same server every time you re-run this Ansible playbook.

exact_count – this parameter specifies the number of EC2 instances of a server that should be running whenever the Ansible playbook is run. If the current number of instances doesn’t match this number, Ansible either creates new EC2 instances for this server or terminates the extra EC2 instances. The servers are identified using the count_tag.

count_tag – this is the instance tag that is used to identify a server. Multiple instances of the same server will have the same tag applied to them. This allows Ansible to easily count how many instances of a server are currently running.

Next, we will deploy the servers within the private subnet. Wait a minute! By default, the servers within the private subnet will be assigned the default security group. The default security group allows unrestricted access to all EC2 instances that have been attached to the default security group. However, since the jumphost is not part of this security group, it will not be able to connect to the servers in the private subnet!

Let’s remediate this issue by creating a new security group that will allow RDP access from the public subnet to the servers within the private subnet (in a real environment, this should be restricted further, so that the incoming connections are from particular servers within the public subnet, and not from the whole subnet itself). This new security group will be associated with the servers within the private subnet.

Copy the following code into your YAML file.
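A sketch of this security group, again assuming the ec2_group module and allowing RDP only from the public subnet CIDR, is below:

    - name: create a security group for the servers in the private subnet
      ec2_group:
        name: sg_ansibleVPC_privatesubnet_servers
        description: security group for the servers in the private subnet
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        state: present
        rules:
          - proto: tcp
            from_port: 3389
            to_port: 3389
            cidr_ip: 172.32.1.0/24   # RDP allowed only from the public subnet
      register: sg_ansibleVPC_privatesubnet_servers

    - name: display the details of the private subnet security group
      debug:
        var: sg_ansibleVPC_privatesubnet_servers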

We are now at the end of the YAML file. Copy the code below to provision the Windows 2016 server within the private subnet (the server will be tagged with Name=win2016svr).
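A sketch of this final task is below; as with the jumphost, the AMI ID is a placeholder:

    - name: deploy the server in the private subnet
      ec2:
        key_name: "{{ my_useast1_key }}"
        instance_type: t2.micro
        image: ami-xxxxxxxxxxxxxxxxx   # placeholder - replace with the current Windows Server 2016 base AMI ID
        group_id: "{{ sg_ansibleVPC_privatesubnet_servers.group_id }}"
        vpc_subnet_id: "{{ private_subnet.subnet.id }}"
        assign_public_ip: no
        region: "{{ vpc_region }}"
        wait: yes
        instance_tags:
          Name: win2016svr
        exact_count: 1
        count_tag:
          Name: win2016svr
      register: win2016svr

    - name: display the details of the private subnet server
      debug:
        var: win2016svr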

Save the playbook with a meaningful name. I named my playbook Ansible-create-AWS-environment.yml

The full Ansible playbook can be downloaded from https://gist.github.com/nivleshc/344dca91e3d0349c8a359b03853886be

Running the Ansible Playbook

Before we run the playbook, we need to tell Ansible about all the computers that are within the management scope. This is done using an inventory file, which contains a group name within square brackets, e.g. [webservers], and below that, all the computers that will be in that group. Then, in the playbook, we just target the group, which in turn targets all the computers in that group.

However, in our scenario, we are directly targeting the local computer (refer to the second line in the YAML file that shows hosts: localhost). In this regard, we can get away with not providing an inventory file. However, do note that doing so will mean that we can’t use anything other than localhost to reference a computer within our playbook.

Let’s create an inventory file called hosts in the same folder as where the playbook is saved. The contents of the file will be as listed below.

[local]
localhost

We are ready to run the playbook now.

Open a terminal session and change to the folder where the playbook was saved.

We need to create some environment variables to store the user details that Ansible will use to connect to AWS. This is where the access key and secret access key that we created initially will be used. Run the following command

export AWS_ACCESS_KEY_ID={access key id}
export AWS_SECRET_ACCESS_KEY={secret access key}

Now run the playbook using the following command (as previously mentioned, we could get away with not specifying the inventory file, however this means that we only can use localhost within the playbook)

ansible-playbook -i hosts ansible-create-aws-environment.yml

You should now see each of the tasks being executed, with the output being shown (remember that after each task, we have a follow-up task that shows the output using the debug keyword?)

Once the playbook execution has completed, check your AWS console to confirm that the following items have been created within the us-east-1 (North Virginia) region

  • A VPC called ansibleVPC with the CIDR 172.32.0.0/16
  • An internet gateway called ansibleVPC_igw
  • A public subnet in the first availability zone with CIDR 172.32.1.0/24
  • A private subnet in the second availability zone with CIDR 172.32.2.0/24
  • A route table called rt_ansibleVPC_PublicSubnet
  • A security group for jumphosts called sg_ansibleVPC_publicsubnet_jumphost
  • A security group for the servers in private subnet called sg_ansibleVPC_privatesubnet_servers
  • An EC2 instance in the public subnet representing a jumphost named win2016jh
  • An EC2 instance in the private subnet representing a server named win2016svr

Once the provisioning is complete, to test, connect to the jumphost and then from there connect to the server within the private subnet.

Don’t forget to turn off the EC2 instances if you don’t intend to use them.

Closing Remarks

Ansible is a great automation tool and can be used to both provision and manage infrastructure within AWS.

Having said that, I couldn’t find an easy way to carry out post-provisioning tasks (e.g. assigning roles, installing additional packages, etc.) after a server has been provisioned, without getting Ansible to connect directly to the provisioned server. This can be a challenge if the Ansible control machine is external to AWS and the provisioned server is within an AWS private subnet. With AWS CloudFormation, this is easily done. If anyone has any advice on this, I would appreciate it if you could leave it in the comments below.

I will surely be using Ansible for most of my automations from now on.

Till the next time, enjoy!

A scenario-based tutorial for Azure Kubernetes Service – Part 1

Introduction

Containers are gaining a lot of popularity these days. They provide an easy way to run applications, without having to worry about the underlying infrastructure.

As you might imagine, managing all these containers can become quite daunting, especially if there are numerous containers. This is where orchestration tools such as Kubernetes are very useful.

Kubernetes was developed by Google and is heavily based on their internal Borg system. It is an excellent tool to manage containers, where you provide a desired state for your containers and Kubernetes takes care of everything to ensure the containers are always in that state (for example, if a pod dies, Kubernetes will automatically start a new pod for that container, to ensure that the defined number of pods are always running). Kubernetes also provides an easy process to scale the number of pods or the number of nodes.

Soon after releasing Kubernetes, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). Kubernetes was then made open-source, with the Cloud Native Computing Foundation acting as its guardian. A nice writeup for Kubernetes history can be found at https://en.wikipedia.org/wiki/Kubernetes.

Kubernetes is abbreviated as k8s. If you are like me and are wondering how the word Kubernetes could possibly be shortened to k8s, here is the answer: the 8 in k8s represents the number of letters between the k and the s in the word Kubernetes.

With the popularity of Kubernetes soaring, Microsoft recently adopted it for its Azure environment, providing Azure Kubernetes Service as a managed service. The service entered general availability in June 2018. If you are interested in reading about this announcement, a good article to read is https://redmondmag.com/articles/2018/06/13/azure-kubernetes-service-ga.aspx .

This blog is the first in the mini-series that I will be publishing about Azure Kubernetes Service. I will take you through the process of creating an Azure Kubernetes Service (AKS) Cluster and then we will create an environment within the AKS cluster using some custom docker images.

In this first blog I will introduce some key Kubernetes terminologies and map out the scenario that the blog mini-series will focus on.

Terminology

Below are some of the key concepts which I believe will help immensely in understanding Kubernetes.

Pods

If you think about a pea pod, there can be one or many peas inside it. Treating each pea as a container, this translates to a pod being an encapsulation of an application container (or, in some cases, multiple containers).

As per the formal definition, a pod is an encapsulation of an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A pod represents a unit of deployment, a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources. A more detailed explanation is available at https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/.

One key point to remember is that pods are ephemeral, they are created and at times they die as well. In that regard, any application that directly accesses pods will eventually fail when the pod dies. Instead, you should always interact with Services, when trying to access containers deployed within Kubernetes.

Services

Due to the ephemeral nature of pods, any application that is directly accessing a pod will eventually suffer a downtime (when the pod dies, and another is created to replace it). To get around this, Kubernetes provides Services.

Think of a Service as being like an application load balancer: it provides a front end for your container and then routes the traffic to a pod running that container. Since your applications always connect to a Service (the properties of the Service remain unchanged during its lifetime), they are shielded from any pod deaths. For more information about Services, refer to https://kubernetes.io/docs/concepts/services-networking/service/.
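To make this concrete, here is a minimal, hypothetical manifest showing a pod labelled app: myapp and a Service that selects it; applications would connect to myapp-service rather than to the pod itself:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: nginx:1.15
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp     # traffic is routed to any pod carrying this label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80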

Namespaces

Namespaces provide a logical way of grouping your Kubernetes cluster. This allows you to provide access to different resources to different sets of users. Namespaces also provide a scope for names. Names must be unique within a namespace; however, they do not need to be unique across namespaces. A more in-depth description can be found at https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

Kubernetes Control Plane (master)

The Kubernetes master (a collection of processes) ensures the Kubernetes cluster is working as expected by maintaining the cluster’s desired state.

Kubernetes Nodes

The nodes are where the containers and workloads run. Nodes can be virtual machines, physical machines, etc. The Kubernetes master controls each node.

Scenario

The diagram below shows the environment we will be deploying within our Azure Kubernetes Service (AKS) cluster.

In summary, we will deploy three pods, each running a customised nginx container. The nginx containers will be listening on non-http/https ports. As Kubernetes does not natively provide a way to route non-http/https traffic to services, we will be deploying nginx ingress controllers to enable this functionality.

Figure 1 – Infrastructure that will be deployed within the Azure Kubernetes Service cluster

In the next blog in this mini-series, we will deploy the Azure Kubernetes Service cluster.

Happy sailing and see you soon!