Using AWS EC2 Instances to train a Convolutional Neural Network to identify Cows and Horses

Background

Machine Learning (ML) and Artificial Intelligence (AI) have been hobbies of mine for years now. After playing with them approximately 8 years back, I let them lapse till early this year, and boy oh boy, how things have matured! There are products in the market these days that use some form of ML – some examples are Apple’s Siri, Google Assistant and Amazon Alexa.

Computational power has increased to the point where calculations that took months can now be done within days. However, the biggest change has come about due to the vast amounts of data that models can be trained on. More data means better accuracy in models.

If you have taken any programming course, you will remember the hello world program. This is a foundational program that introduces you to the language and gives you the confidence to continue on. The hello world of ML is identifying cats and dogs. In almost every online course I have taken, this is the first project you build.

For anyone wanting a background on Machine Learning, I would highly recommend Andrew Ng’s https://www.coursera.org/learn/machine-learning on Coursera. However, be warned, it has a lot of maths 🙂 If you are able to get through it, you will gain a very good foundation in ML.

If theory is not your cup of tea, another way to approach ML is to just implement it and learn as you go. You don’t need a PhD in ML to start implementing it. This is the philosophy behind Jeremy Howard and Rachel Thomas’s http://www.fast.ai. They take you through the implementation steps and introduce you to the theory on a need-to-know basis; in essence, a top-down approach.

I am still a few lessons away from finishing the fast.ai course; however, I have already learnt so much and cannot recommend it enough.

In this blog, I will take you through the steps to implement a Convolutional Neural Network (CNN) that will be able to pick out horses from cows. CNNs are quite complicated in nature, so we won’t go into the nitty-gritty details of creating one from scratch. Instead, we will use the foundational libraries from fast.ai’s lesson 1 and modify them a bit, so that instead of identifying cats and dogs, we will use them to identify cows and horses.

In the process, I will introduce you to a tool that will help you scrape Google for your own image dataset.

Most important of all, I will show you how the amount of data used to train your CNN model affects its accuracy.

So, put your seatbelts on and let’s get started!


1. Setting up the AWS EC2 Instance

ML requires a lot of processing power. To get really good throughput, it is recommended to use GPUs instead of CPUs. If you were to build a kit to try this at home, it could easily cost you a few thousand dollars, not to mention the bill for the cooling and electricity usage.

However, with Cloud Computing, we don’t need to go out and buy the whole kit; instead, we can just rent it for as long as we want. This provides a much more affordable way to learn ML.

In this blog, we will be using AWS EC2 instances. For the cheapest GPUs, we will use a p2.xlarge instance. Be warned, these cost $0.90/hr, so I would suggest turning them off when you are not using them, otherwise you will surely rack up a huge bill.

Reshma has done a fantastic job of putting together the instructions on setting up an AWS Instance for running fast.ai course lessons. I will be using her instructions, with a few modifications. Reshma’s instructions can be found here.

OK, let’s begin.

  • Login to your AWS Console
  • Go to the EC2 section
  • On the top left menu, you will see EC2 Dashboard. Click on Limits under it
  • Now, on the right you will see all the types of EC2 instances you are allowed to run. Search for p2.xlarge instances. These have a current limit of zero, meaning you cannot launch them. Click on Request limit increase and then fill out the form to justify why you want a p2.xlarge instance. Once done, click on Submit. In my case, within a few minutes, I received an email saying that my limit increase had been approved.
  • Click on EC2 Dashboard from the left menu
  • Click on Launch Instance
  • In the next screen, in the left hand side menu, click on Community AMIs
  • On the right side of the screen, search for fast.ai
  • From the results, select fastai-part1v2-p2
  • In the next screen (Instance Type) filter by GPU compute and choose p2.xlarge
  • In the next screen, configure the instance details. Ensure you get a public IP address (Auto-assign Public IP) because you will be connecting to this instance over the internet. Once done, click Next: Add Storage
  • In the next screen, you don’t need to do anything. Just be aware that the community AMI comes with an 80GB hard disk (at $0.10/GB/Month, this will amount to $8/Month). Click Next
  • In the next screen, add any tags for the EC2 Instance. To give the instance a name, you can set the Key to Name and the Value to fastai. Click Next
  • For security groups, all you need to do is allow SSH to the instance. You can leave the source as 0.0.0.0/0 (this allows connections to the EC2 instance from any public IP address). If you want to be super secure, you can set the source to your current IP address. However, doing this means that should your public IP address change (hardly any ISP gives you a static IP address unless you pay extra), you will have to go back into the AWS Console and update the source in the security group. Click Next
  • In the next section, check that all details are correct and then click on Launch. You will be asked for your key pair. You can either choose an existing key pair or create a new one. Ensure you keep the key pair in a safe place because whoever possesses it can connect to your EC2 instance.
  • Now, sit back and relax. Within a few minutes, your EC2 instance will be ready. You can monitor the progress in the EC2 Dashboard

DON’T FORGET TO SHUTDOWN THE INSTANCE WHEN NOT USING IT. AT $0.90/hr, IT MIGHT NOT SEEM MUCH, HOWEVER THE COST CAN EASILY ACCUMULATE TO SOMETHING QUITE EXPENSIVE

2. Creating the dataset

To train our Convolutional Neural Network (CNN), we need lots of images of cows and horses. This got me thinking. Why not get them off Google? But this presented another challenge. How do I download all the images? Surely I don’t want to be sitting there right-clicking each search result and saving it!

After some googling, I landed on https://github.com/hardikvasa/google-images-download. It does exactly what I wanted: it performs a Google image search using a keyword and downloads the results.

Install it using the instructions provided in the link above. By default, it only downloads 100 images. As CNNs need lots more, I would suggest installing chromedriver. The instructions to do this are in the Troubleshooting section under ## Installing the chromedriver (with Selenium)

To download 1000 images each of cows and horses, use the command lines below (for some reason the tool only downloads around 800 images). A few notes on the options:

  • the downloaded images will be stored in the subfolders cows/downloaded and horses/downloaded in the /Users/x/Documents/images folder
  • keywords denotes what we are searching for on Google. For cows, we will use cow because we want photos of a single cow. The same goes for horses
  • --chromedriver provides the path to where the chromedriver executable has been stored
  • the images will be in jpg format
googleimagesdownload --keywords "cow" --format jpg --output_directory "/Users/x/Documents/images/" --image_directory "cows/downloaded" --limit 1000 --chromedriver /Users/x/Documents/tools/chromedriver
googleimagesdownload --keywords "horse" --format jpg --output_directory "/Users/x/Documents/images/" --image_directory "horses/downloaded" --limit 1000 --chromedriver /Users/x/Documents/tools/chromedriver
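As a side note, the tool can also be driven from Python instead of the command line. Below is a minimal sketch of that approach; the argument names mirror the command line flags, but treat them as assumptions and double-check them against the project’s README.

from google_images_download import google_images_download

#download 1000 cow images using the same options as the command line above
downloader = google_images_download.googleimagesdownload()
downloader.download({
    'keywords': 'cow',
    'format': 'jpg',
    'output_directory': '/Users/x/Documents/images/',
    'image_directory': 'cows/downloaded',
    'limit': 1000,
    'chromedriver': '/Users/x/Documents/tools/chromedriver'
})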

3. Finding and Removing Corrupt Images

One disadvantage of using the googleimagesdownload script is that, at times, a downloaded image cannot be opened. This will cause issues when our CNN tries to use it for training/validation. To ensure our CNN does not encounter any issues, we will do some housekeeping beforehand and remove all corrupt images (images that cannot be opened).

I wrote the following Python script to find and move the corrupt images to a separate folder. The script uses the matplotlib library (the same library used by the fast.ai CNN framework). If you don’t have it, you will need to install it from https://matplotlib.org/users/installing.html.

The script assumes that within the root folder, there is a subfolder called downloaded which contains all the images. It also assumes there is a subfolder called corrupt within the root folder. This is where the corrupt images will be moved to. Set the root_folder_path to the parent folder of the folder where the images are stored.

#this script will go through the downloaded images and find those that cannot be opened. These will be moved to the corrupt folder.

#load libraries
import matplotlib.pyplot as plt
import os

#image folders
root_folder_path = '/Users/x/Documents/images/cows/'
image_folder_path = root_folder_path + 'downloaded/'
corrupt_folder_path = root_folder_path + 'corrupt' #folder where the corrupt images will be moved to

#get a list of all files in the image folder
image_files = os.listdir(image_folder_path)

print(f'Total Image Files Found: {len(image_files)}')
num_image_moved = 0

#go through each image file and see if we can read it
for imageFile in image_files:
    filePath = image_folder_path + imageFile
    #print(f'Reading {filePath}')
    try:
        valid_img = plt.imread(filePath)
    except:
        #the image could not be read, so move it to the corrupt folder
        print(f'Error reading {filePath}. File will be moved to corrupt folder')
        os.rename(filePath, os.path.join(corrupt_folder_path, imageFile))
        num_image_moved += 1

print(f'Moved {num_image_moved} images to corrupt folder')

For some unknown reason, the script, at times, moves good images into the corrupt folder as well. I would suggest that you go through the corrupt images and see if you can open them (there won’t be many in the corrupt folder). If you can, just manually move them back into the downloaded folder.
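If you find that matplotlib flags files that are actually fine, an alternative check (assuming you have the Pillow library installed) is to let PIL try to parse each file instead. A minimal sketch:

from PIL import Image

def is_valid_image(file_path):
    #returns True if PIL can parse the image file, False otherwise
    try:
        with Image.open(file_path) as img:
            img.verify() #raises an exception if the file is broken
        return True
    except Exception:
        return False

You could then call is_valid_image(filePath) in place of the plt.imread call in the script above.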

To make the images easier to handle, let’s rename them using the following format (a small script that does the renaming is sketched after the list).

  • For the images in the cows/downloaded folder rename them to a format CowXXX.jpg where XXX is a number starting from 1
  • For the images in the horses/downloaded folder rename them to a format HorseXXX.jpg where XXX is a number starting from 1
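Below is a minimal sketch of a script that does the renaming. It assumes the folder layout described above; adjust the paths and prefixes to suit.

import os

def rename_images(folder_path, prefix):
    #renames the jpg files in folder_path to prefix1.jpg, prefix2.jpg, and so on
    jpg_files = sorted(f for f in os.listdir(folder_path) if f.lower().endswith('.jpg'))
    for index, image_file in enumerate(jpg_files, start=1):
        os.rename(os.path.join(folder_path, image_file),
                  os.path.join(folder_path, f'{prefix}{index}.jpg'))

rename_images('/Users/x/Documents/images/cows/downloaded', 'Cow')
rename_images('/Users/x/Documents/images/horses/downloaded', 'Horse')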


4. Transferring the images to the AWS EC2 Instance

In the following sections, I am using ssh and scp, which come built in with macOS. For Windows, you can use PuTTY for ssh and WinSCP for scp.

A CNN (or any other Neural Network model) is trained using a set of images. Once training has finished, to find out how accurate the model is, we give it a set of validation images (these are different from those it was trained on, however we know what each image contains) and ask it to identify them. We then compare its answers with the actual labels to calculate the accuracy.


In this blog, we will first train our CNN on a small set of images.

Do the following

  • create a subfolder inside the cows folder and name it train
  • create a subfolder inside the cows folder and name it valid
  • move 100 images from the cows/downloaded folder into the cows/train folder
  • move 20 images from the cows/downloaded folder into the cows/valid folder

Make sure the images in the cows/train folder are not the same as those in cows/valid folder

Do the same for the horses images, so basically (if you would rather not move the files by hand, a helper script is sketched after this list):

  • create a subfolder inside the horses folder and name it train
  • create a subfolder inside the horses folder and name it valid
  • move 100 images from the horses/downloaded folder into the horses/train folder
  • move 20 images from the horses/downloaded folder into the horses/valid folder
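Here is a minimal sketch of a helper script for the shuffle above. It assumes the folder layout and image counts described in the lists, and picks the train and valid images at random; adjust the paths and counts to suit.

import os
import random
import shutil

root_folder_path = '/Users/x/Documents/images/'

for animal in ['cows', 'horses']:
    downloaded_path = os.path.join(root_folder_path, animal, 'downloaded')
    train_path = os.path.join(root_folder_path, animal, 'train')
    valid_path = os.path.join(root_folder_path, animal, 'valid')

    #create the train and valid subfolders if they don't already exist
    os.makedirs(train_path, exist_ok=True)
    os.makedirs(valid_path, exist_ok=True)

    #shuffle the images so the train and valid sets are picked at random
    image_files = os.listdir(downloaded_path)
    random.shuffle(image_files)

    #move 100 images into train and the next 20 into valid
    for image_file in image_files[:100]:
        shutil.move(os.path.join(downloaded_path, image_file), train_path)
    for image_file in image_files[100:120]:
        shutil.move(os.path.join(downloaded_path, image_file), valid_path)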

Now connect to the AWS EC2 instance using the following command

ssh -i key.pem ubuntu@public-ip

where

  • key.pem is the key pair that was used to create the AWS EC2 instance (if the key pair is not in the current folder then provide the full path to it)
  • public-ip is the public ip address for your AWS EC2 instance (this can be obtained from the EC2 Dashboard)

Once connected, use the following commands to create the required folders

cd data
mkdir cowshorses
mkdir cowshorses/train
mkdir cowshorses/valid
mkdir cowshorses/train/cows
mkdir cowshorses/train/horses
mkdir cowshorses/valid/cows
mkdir cowshorses/valid/horses

Close your ssh session by typing exit

Run the following commands to transfer the images from your local computer to the AWS EC2 instance

To transfer the cows training set
scp -i key.pem /Users/x/Documents/images/cows/train/* ubuntu@public-ip:~/data/cowshorses/train/cows

To transfer the horses training set
scp -i key.pem /Users/x/Documents/images/horses/train/* ubuntu@public-ip:~/data/cowshorses/train/horses

To transfer the cows validation set
scp -i key.pem /Users/x/Documents/images/cows/valid/* ubuntu@public-ip:~/data/cowshorses/valid/cows

To transfer the horses validation set
scp -i key.pem /Users/x/Documents/images/horses/valid/* ubuntu@public-ip:~/data/cowshorses/valid/horses

5. Starting the Jupyter Notebook

Jupyter Notebooks are one of the most popular tools used by ML practitioners and data scientists. For those that aren’t familiar with Jupyter Notebooks, in a nutshell, it is a web page that contains descriptions and interactive code. The user can run the code live from within the document. This is possible because Jupyter Notebooks execute the code on the server they are running on and then display the result in the web page. For more information, you can check out http://jupyter.org

In our case, we will be running the Jupyter Notebook on the AWS EC2 instance. However, we will be accessing it through our local computer. For security reasons, we will not publish our Jupyter Notebook to the whole wide world (lol that does spell www).

Instead, we will use the following ssh command to bind our local computer’s tcp port 8888 to the AWS EC2 instance’s tcp port 8888 (this is the port on which the Jupyter Notebook will be running) when we connect to it. This will allow us to access the Jupyter Notebook as if it is running locally on our computer, however the connection will be tunnelled to the AWS EC2 instance.

ssh  -i key.pem ubuntu@public-ip -L8888:localhost:8888

Next, run the following commands to start an instance of Jupyter Notebook

cd fastai
jupyter notebook

After the Jupyter Notebook starts, it will provide a URL to access it, along with the token to authenticate with. Copy it and then paste it into a browser on your local computer.

You will now be able to access the fastai Jupyter Notebook.

Follow the steps below to open Lesson 1.

  • click on the courses folder
  • once inside the courses folder,  click on the  dl1 folder

In the next screen, find the file lesson1.ipynb and double-click it. This will launch the lesson1 Jupyter Notebook in another tab.

Give yourself a big round of applause for reaching so far!

Now, start from the top of lesson1 and go through the first three code sections and execute them. To execute the code, put the mouse pointer in the code section and then press Shift+Enter.

In the next section, change the path to where we moved the cows and horses pictures to. It should look like below

PATH = "data/cowshorses/"

Then, execute this code section.

Skip the following sections

  • Extra steps if NOT using Crestle or Paperspace or our scripts
  • Extra steps if using Crestle

Just a word of caution. The original Jupyter Notebook is meant to distinguish between cats and dogs. However, since we are using it to distinguish between cows and horses, whenever you see a mention of cats, change it to cows and whenever you see a mention of dogs, change it to horses.

The following lines don’t need any changing, so just execute them as they are

os.listdir(PATH)
os.listdir(f'{PATH}valid')

In the next line, replace cats with cows so that you end up with the following

files = !ls {PATH}valid/cows | head
files

Execute the above code. A list of the first 10 cow image files will be displayed.

Next, let’s see what the first cow image looks like.

In the next line, change cats to cows to get the following.

img = plt.imread(f'{PATH}valid/cows/{files[0]}')
plt.imshow(img);

Execute the code and you will see the cow image displayed.

Execute the next two code sections. Leave the section after that commented out.

Now, instead of creating a CNN model from scratch, we will use one that was pre-trained on ImageNet which had 1.2 million images and 1000 classes. So it already knows quite a lot about how to distinguish objects. To make it suitable to what we want to do, we will now train it further on our images of cows and horses.

The following defines which model to use and provides the data to train on (the CNN model that we will be using is called resnet34). Execute the below code section.

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)

And now for the best part! Let’s train the model and give it a learning rate of 0.01.

learn.fit(0.01, 1)

After you execute the above code, the model will be trained on the cows and horses images that were provided in the train folders. The model will then be tested for accuracy by getting it to identify the images contained in the valid folders. Since we already know what the images are of, we can use this to calculate the model’s accuracy.

When I ran the above code, I got an accuracy of 0.75. This is quite good, since it means the model can tell cows from horses 75% of the time. Not to forget, we used only 100 cow images and 100 horse images to train it, and it didn’t even take that long to train!

Now, let’s see what happens when we give it loads more images to train on.

BTW, to get more insights into the results from the trained model, you can go through all the sections between the lines learn.fit(0.01, 1) and Choosing a learning rate.
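From memory, those cells boil down to something like the sketch below (the exact contents are in the lesson1 notebook itself, so treat this as a rough guide). numpy is already available as np through the notebook’s fastai imports.

#get the log probabilities for the validation set
log_preds = learn.predict()

#convert the log probabilities to predicted classes and compare them with the actual validation labels
preds = np.argmax(log_preds, axis=1)
accuracy = (preds == data.val_y).mean()
print(f'Validation accuracy: {accuracy}')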

Another take at training the model

From all the literature I have been reading, one point keeps repeating: more data means better models. Let’s put this to the test.

This time around we will give the model ALL the images we downloaded.

Do the following.

  • on your local computer, move the photos back to the downloaded folder
    • move photos from cows/train to cows/downloaded
    • move photos from cows/valid to cows/downloaded
    • move photos from horses/train to horses/downloaded
    • move photos from horses/valid to horses/downloaded
  • on your local computer, move 100 photos of cows to cows/valid folder and the rest to the cows/train folder
    • move 100 photos from cows/downloaded to cows/valid folder
    • move the rest of the photos from cows/downloaded to cows/train folder
  • on your local computer, move 100 photos for horses to horses/valid and the rest to horses/train folder
    • move 100 photos from horses/downloaded to horses/valid folder
    • move the rest of the photos from horses/downloaded to horses/train folder
  • on the AWS EC2 instance, delete all the photos under the following folders
    • /data/cowshorses/train/cows
    • /data/cowshorses/train/horses
    • /data/cowshorses/valid/cows
    • /data/cowshorses/valid/horses

Use the following commands to copy the images from the local computer to the AWS EC2 Instance

To transfer the cows training set
scp -i key.pem /Users/x/Documents/images/cows/train/* ubuntu@public-ip:~/data/cowshorses/train/cows

To transfer the horses training set
scp -i key.pem /Users/x/Documents/images/horses/train/* ubuntu@public-ip:~/data/cowshorses/train/horses

To transfer the cows validation set
scp -i key.pem /Users/x/Documents/images/cows/valid/* ubuntu@public-ip:~/data/cowshorses/valid/cows

To transfer the horses validation set
scp -i key.pem /Users/x/Documents/images/horses/valid/* ubuntu@public-ip:~/data/cowshorses/valid/horses

Now that everything has been prepared, re-run the Jupyter Notebook, as stated under Starting Jupyter Notebook above (ensure you start from the top of the Notebook).

When I trained the model on ALL the images (less those in the valid folders), I got an accuracy of 0.95! Wow, that is so amazing! I didn’t do anything other than increase the amount of images in the training set.

Final thoughts

In a future blog post, I will show you how you can use the trained model to identify cows and horses in an unlabelled set of photos.

For now, I would highly recommend that you use the above mentioned image downloader to scrape Google for some other datasets. Then use the above instructions to train the model on those images and see what kind of accuracy you can achieve (maybe try identifying chickens and ducks?)

As mentioned before, once finished, don’t forget to shut down your AWS EC2 instance. If you don’t need it anymore, you can terminate it, to save on storage costs as well.

If you are keen about ML, you can check out the courses at http://www.fast.ai (they are free)

If you want to dabble in the maths behind ML, as previously mentioned, Andrew Ng’s https://www.coursera.org/learn/machine-learning is one of the finest.

Lastly, if you are keen to take on some ML challenges, check out https://www.kaggle.com They have lots and lots of competitions running all the time, some of which pay out actual money. There are lots of resources as well, and you can learn from others on the site.

Till the next time, Enjoy 😉


Deploying an Active Directory Forest using AWS CloudFormation

Introduction

Wow, it is amazing how time flies. Almost two years ago, I wrote a set of blogs that showed how one can use Azure Resource Manager (ARM) templates and Desired State Configuration (DSC) scripts to deploy an Active Directory Forest automatically.

For those that would like to take a trip down memory lane, here is the link to the blog.

Recently, I have been playing with AWS CloudFormation and I am simply in awe by its power. For those that are not familiar with AWS CloudFormation, it is a tool, similar to Azure Resource Manager, that allows you to “code” your computing infrastructure in Amazon Web Services. Long gone are the days when you would have to sit down, pressing each button and choosing each option to deploy your environment. Cloud computing provides you with a way to interface with the fabric, so that you can script the build of your environment. The benefits of this are enormous. Firstly, it allows you to standardise all your builds. Secondly, it allows you to have a live as-built document (the code is the as-built document). Thirdly, the code is re-useable. Most important of all, since the deployment is now scripted, you can automate it.

In this blog I will show you how to create an AWS CloudFormation template to deploy an AWS Elastic Compute Cloud (EC2) Windows Server instance. The template will also include steps to promote the EC2 instance to a Domain Controller in a new Active Directory Forest.

Guess what the best part is? Once the template has been created, all you will have to do is to load it into AWS CloudFormation, provide a few values and sit back and relax. AWS CloudFormation will do everything for you from there on!

Sounds interesting? Lets begin.

Creating the CloudFormation Template

A CloudFormation template starts with a definition of the parameters that will be used. The person running the template (let’s refer to them as the operator) will be asked to provide the values for these parameters.

When defining a parameter, you will provide the following

  • a name for the parameter
  • its type
  • a brief description for the parameter so that the operator knows what it will be used for
  • any constraints you want to put on the parameter, for instance
    • a maximum length (for strings)
    • a list of allowed values (in this case a drop down list is presented to the operator, to choose from)
  • a default value for the parameter

For our template, we will use parameters such as the server name, the OS version, the Environment and the AvailabilityZone (the full parameter definitions are in the complete template linked at the end of this post).

Next, we will define some mappings. Mappings allow us to define the values for variables, based on what value was provided for a parameter.

When creating EC2 instances, we need to provide a value for the Amazon Machine Image (AMI) to be used. In our case, we will use the OS version to decide which AMI to use.

To find the subnet into which the EC2 instance will be deployed, we will use the Environment and AvailabilityZone parameters.

The mappings we will use are defined in the Mappings section of the complete template (linked at the end of this post).

The next section in the CloudFormation template is Resources. This defines all resources that will be created.

If you have any experience deploying Active Directory Forests, you will know that it is extremely simple to do it using PowerShell scripts. Guess what, we will be using PowerShell scripts as well 😉 Now, after the EC2 instance has been created, we need to provide the PowerShell scripts to it, so that it can run them. We will use AWS Simple Storage Service (S3) buckets to store our PowerShell scripts.

To ensure our PowerShell scripts are stored securely, we will allow access to it only via a certain role and policy.

To access the S3 bucket where the PowerShell scripts are stored, the template creates an AWS Identity and Access Management (IAM) role and policy (again, see the complete template linked at the end of this post).

We will use cfn-init to do all the heavy lifting for us once the EC2 instance has been created. cfn-init is a helper utility that comes pre-installed on the Amazon Windows AMIs, and we can ask it to perform tasks for us.

To trigger cfn-init, we will use the UserData feature of EC2 instance provisioning. cfn-init, when started, will check the instance’s CloudFormation metadata for the credentials it will use, and also for all the tasks it needs to perform.

The metadata that will be used can be seen in the complete template linked at the end of this post. For simplicity, I have hardcoded the URLs to the files in the S3 bucket.

In the metadata, I have first defined the role that cfn-init will use to access the S3 bucket. Next, the following tasks will be carried out, in the order defined in the configuration set

  • get-files
    • it will download the files from S3 and place them in the local directory c:\s3-downloads\scripts.
  • configure-instance (the commands in this section are run in alphabetical order, that is why I have prefixed them with a number, to ensure it follows the order I want)
    • It will change the execution policy for PowerShell to unrestricted (please note that this is just for demonstration purposes and the execution policy should not be made this relaxed).
    • next, the name of the server will be changed to what was provided in the Parameters section
    • the following Windows Components will be installed (as defined in the Add-WindowsComponents.ps1 script file)
      • RSAT-AD-PowerShell
      • AD-Domain-Services
      • DNS
      • GPMC
    • the Active Directory Forest will be created, using the Configure-ADForest.ps1 script and the values provided in the Parameters section

In the last part of the CloudFormation template, we will provide the UserData information that will trigger cfn-init to run and do all the configuration. We will also tag the EC2 instance, based on values from the Parameters section.

For simplicity, I have hardcoded the security group that will be attached to the EC2 instance (this is defined as GroupSet under NetworkInterfaces). You can easily create an additional parameter for this, if you want.

Finally, our template will output the instance’s hostname, the environment it has been created in and its private IP address. This provides an easy way to identify the EC2 instance once it has been created.

This last part can also be seen in the complete template linked below.

Now all you have to do is login to AWS CloudFormation, load the template we have created, provide the parameter values and sit back and relax.

AWS CloudFormation will take it from here and do everything for you 😉

How easy was that? Magic 🙂
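If you would rather script this last step as well, the same thing can be done with boto3 (assuming it is installed and your AWS credentials are configured). Below is a minimal sketch; the stack name, template file name and parameter values are placeholders, and CAPABILITY_IAM is needed because the template creates an IAM role and policy.

import boto3

cloudformation = boto3.client('cloudformation')

#read the CloudFormation template from a local file
with open('ad-forest.template') as template_file:
    template_body = template_file.read()

#create the stack, passing in values for the template's parameters
response = cloudformation.create_stack(
    StackName='ad-forest',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'Environment', 'ParameterValue': 'Dev'},
    ],
    Capabilities=['CAPABILITY_IAM']
)
print(response['StackId'])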

The complete CloudFormation template is available at https://gist.github.com/nivleshc/867b1a2ca119c7d22cf215b5a9a5de02

The two PowerShell Scripts that are used in the CloudFormation template can be downloaded using the links below

Add-WindowsComponents.ps1

Configure-ADForest.ps1

For anyone deploying an Active Directory Forest in AWS, I hope the above comes in handy.

Enjoy 😉

Amazon QuickSight – An elegant and easy to use business analytics tool

Introduction

Recently, I had a requirement for a tool to visualise some data I had collected. My requirements were very simple. I didn’t want something that would cost me a lot, and at the same time I wanted the reports to be elegant and informative. Most of all, I didn’t want to have to go through pages and pages of documentation to learn how to use it.

As my data was within Amazon Web Services (AWS), I thought to check if AWS had any such offerings. Guess what, there was indeed a tool just for what I wanted, and after using it, I was amazed at how simple and elegant it is.

In this blog, I will show how you can easily get started with Amazon QuickSight. I will take you through the steps to import your data into Amazon QuickSight and then create some informative visualisations.

Some background on Amazon QuickSight

Pricing

Amazon QuickSight is very inexpensive; in fact, if you don’t have too much data, you won’t have to pay anything!

For standard edition use, Amazon QuickSight provides 1GB of SPICE for the first user free per month. SPICE is an acronym for Super-fast, Parallel, In-memory, Calculation Engine and it uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, machine code generation, and data compression to allow users to run interactive queries on large datasets and get rapid responses.  SPICE is the calculation engine that Amazon QuickSight uses.

Any additional SPICE is priced at $USD0.25 per GB/month. For the latest pricing, please refer to https://aws.amazon.com/quicksight/#Pricing

Data Sources

Currently Amazon QuickSight supports the following data sources

  • Relational Data Sources
    • Amazon Athena
    • Amazon Aurora
    • Amazon Redshift
    • Amazon Redshift Spectrum
    • Amazon S3
    • Amazon S3 Analytics
    • Apache Spark 2.0 or later
    • Microsoft SQL Server 2012 or later
    • MySQL 5.1 or later
    • PostgreSQL 9.3.1 or later
    • Presto 0.167 or later
    • Snowflake
    • Teradata 14.0 or later
  • File Data Sources
    • CSV/TSV – (comma separated, tab separated value text files)
    • ELF/CLF – Extended and common log format files
    • JSON – Flat or semi-structured data files
    • XLSX – Microsoft Excel files

Unfortunately, Amazon DynamoDB is currently not supported as a native data source. Since my data is in Amazon DynamoDB, I had to write some custom Lambda functions to export it to a CSV file so that it could be imported into Amazon QuickSight.
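For anyone in the same boat, the export itself does not need much code. Below is a minimal sketch of the scan-and-export approach (it can run locally or inside a Lambda function); the table name orders and the output file name are placeholders.

import boto3
import csv

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('orders')

#scan the whole table, following the pagination marker until all items have been read
items = []
response = table.scan()
items.extend(response['Items'])
while 'LastEvaluatedKey' in response:
    response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
    items.extend(response['Items'])

#write the items out as a csv file
field_names = sorted({key for item in items for key in item})
with open('orders.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=field_names)
    writer.writeheader()
    writer.writerows(items)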

Ok, time for that walk-through I promised earlier.  For this blog, I will be using an S3 bucket as my data source. It will contain the CSV files that I will use for analysis in Amazon QuickSight.

Step 1 – Create S3 buckets

If you haven’t already done so, create an S3 bucket that will contain the csv files. The S3 bucket does not have to be publicly accessible. Once created, upload the csv files into the S3 bucket.

In my case, the csv file is called orders.csv and its location is https://s3.amazonaws.com/sample/orders.csv (to get the URL to your S3 file, login to the S3 console and navigate to the S3 bucket that contains the file. Click the S3 bucket to open it, then click the file name to open its properties. Under Overview you will see Link. This is the URL to the file)
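If you prefer to script the upload rather than use the console (assuming boto3 is installed and your AWS credentials are configured), a couple of lines will do it. The bucket name sample and file name orders.csv below match the example above.

import boto3

#upload orders.csv into the S3 bucket named sample
s3 = boto3.client('s3')
s3.upload_file('orders.csv', 'sample', 'orders.csv')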

Step 2 – Create an Amazon QuickSight Account

Before you start using Amazon QuickSight, you must create an account. Unfortunately, I couldn’t find a way to create an Amazon QuickSight account without creating an Amazon AWS account. If you don’t have an existing Amazon AWS account, you can create an AWS Free Tier account. Once you have an AWS account, go ahead and create an Amazon QuickSight account at https://aws.amazon.com/quicksight/.

While creating your Amazon QuickSight account, you will be asked if you would like Amazon QuickSight to auto-discover your Amazon S3 buckets. Enable this and then click to Choose S3 buckets. Choose the S3 bucket that you created in Step 1 above. This will give Amazon QuickSight read-only access to the S3 bucket, so that it can read the data for analysis.

Step 3 – Create a manifest file

A manifest file is a JSON file that provides the location and format of the data files to Amazon QuickSight. This is required when creating a data set for S3 data sources. Please refer to https://docs.aws.amazon.com/quicksight/latest/user/supported-manifest-file-format.html if you would like more information about manifest files.

Below is my manifest file, which I have affectionately named ordersmanifest.json.

{
   "fileLocations": [
      {
         "URIs": [
            "https://s3.amazonaws.com/sample/orders.csv"
         ]
      }
   ],
   "globalUploadSettings": {
      "format": "CSV",
      "delimiter": ",",
      "textqualifier": "'",
      "containsHeader": "true"
   }
}

Once created, upload the manifest file into the same S3 bucket as to where the csv file is stored.

Step 4 – Create a data set

  • Login to your Amazon QuickSight account. From the top right, click on Manage data
  • In the next screen, click on New data set
  • In the next screen, for Create a Data Set FROM NEW DATA SOURCES, click on S3
  • In the next screen
    • provide a name for the data source
    • for Upload a manifest file ensure URL is selected and enter the URL to the manifest file (you can get the URL by logging into the S3 console and clicking on the manifest file to reveal its properties. Under the Overview tab, you will see Link. This is the URL to the manifest file)
    • Click Connect
    • Amazon QuickSight will now read the manifest file and then import the csv file to SPICE
    • Click on Edit/Preview data
    • In the next screen, you will see the contents of the data file that was imported, along with the field names on the left. If you want to exclude any columns from the analysis, simply untick them (I unticked orderTime (S) since I didn’t need it)
    • By default, the data is called Group 1. To customise the name, replace Group 1 with a text of your choice (I have renamed my data to Orders Data)
    • Click Save & visualize from the top menu

Step 5 – Create Visualisations

Now that you have imported the data into SPICE, you can start analysing it and creating visualisations.

After step 4, you should be in the Analysis section.

  • Depending on which visualisation you want, you can select the respective type under Visual types at the bottom left hand side of the screen. For my visualisations, I chose Pie Chart (side note – you will notice that orderTime (S) isn’t listed under the Fields list. This is because we had unticked it in the previous screen)
  • I want to create two Pie Charts, one to show me which foodName is the most popular and another to find out which drinkName is the most popular. For the first Pie Chart, drag foodName (S) from the Fields list to the Value – Add a measure here box at the top of the screen. Then drag foodName (S) from the Fields list to the Group/Color – Add a dimension here box at the top of the screen
  • You can customise the visualisation title Count of Foodname (S) by Foodname (S) by clicking it and then changing the text (I have changed the title to Popularity of Food Types)
  • If you look closely, the legend on the right hand side doesn’t serve much purpose since the pie slices are already labelled quite well. You can get rid of the legend and gain more space for your visual. To do this, click on the down arrow above FoodName (S) on the right and then select Hide legend
  • Next, let’s create a Pie Chart visualisation for drinkName. From the top menu, click on Add and then Add visual
  • You will now have another canvas at the bottom of the first Pie Chart. Click this new canvas area to select it (a blue border will appear to show that it is selected). From Visual types at the bottom left hand side, click on the Pie Chart visual. Then from the top, click on Field wells to expose the Value and Group/Color boxes for the second canvas
  • From the Fields list on the left, drag drinkName (S) to the Value – Add a measure here box at the top of the screen. Then drag drinkName (S) from the Fields list to the Group/Color – Add a dimension here box at the top of the screen
  • We are almost done. I actually want the two Pie Charts to sit side by side, instead of one on top of the other. To do this, I will show you a neat trick. In each of the visuals, at the bottom right border, you will see two diagonal lines. If you move your mouse pointer over them, they change to a resizing cursor. Use this to resize the visual’s canvas area. Also, in the middle of the top border of the visual, you will see two rows of gray dots. Click your mouse pointer on this and drag to the location you want to move the visual to
  • I have hidden the legend for the second visual, customised the title, resized both visuals and moved them side by side. Voila! Below is what I get. Not bad, aye!

Step 6 – Create a dashboard

Now that the visuals have been created, they can be shared with others. This can be done by creating a dashboard. A dashboard is a read-only snapshot of the analysis. When you share the dashboard with others, they can view and filter the dashboard data; however, any filters applied to the dashboard visual exist only while the user is viewing the dashboard, and aren’t saved once it is closed.

One thing to note about sharing dashboards – you can only share dashboards with users who have an Amazon QuickSight account.

Creating a dashboard is very easy.

  • In the Analysis screen, on the top right corner, click on Share and then select Create dashboard
  • You can either replace an existing dashboard or create a new one. In our case, since we are creating a new dashboard, select Create a new dashboard as and enter a name for the dashboard. Once finished, click Create dashboard
  • You will then be asked to enter the username or email address of those you want to share the dashboard with. Enter this and click on Share
  • That’s it, your dashboard is now created. To access it, go to the Amazon QuickSight home screen (click on the Amazon QuickSight icon on the top left hand side of the screen) and then click on All dashboards. Those that you have shared the dashboard with will also be able to see it once they login to their Amazon QuickSight account

Step 7 – Refreshing the Data Set

If your data set continually changes, your visualisations/dashboards will not show the updated information unless the data set is refreshed. Refreshing the data set imports the new data into SPICE, which then automatically updates the analyses/visualisations and dashboards.

Note: you will have to manually reload the webpage to see the updated visualisations and dashboard

There are two ways of refreshing data sets. One is to do it manually while the other is to use a schedule. The scheduled data refresh allows for the data to be automatically refreshed at a certain time daily, weekly or monthly. A maximum of five scheduled refreshes can be configured.

The steps below show how you can manually refresh the data or create schedules to refresh the data

  • From the Amazon QuickSight main screen, click on Manage data from the top left of the screen
  • In the next screen, you will see all your currently configured data sets. Click the Orders Data data set (this is the one we had created previously)
  • In the next screen, you will see Refresh Now and Schedule refresh
  • Clicking on Refresh Now will manually refresh the data. Clicking on Schedule refresh will bring up the screen where you can configure a schedule for refreshing the data automatically.


That’s it folks! Wasn’t that simple? If you already have an Amazon AWS account, I would strongly recommend giving Amazon QuickSight a try for all your analytics needs. Even if you don’t have an Amazon AWS account, I would still suggest getting an AWS free tier account to try it out.

Enjoy 😉


My path to AWS Certified Solutions Architect – Associate

Almost a week and a half ago, I sat for and passed my AWS Certified Solutions Architect – Associate  (AWS CSAA) exam. To be quite honest, I felt the biggest sense of relief, after walking out of the exam centre, not because I had passed, but because I had finally forced myself to sit this exam.

Since posting on LinkedIn about passing my AWS CSAA exam, I have received a lot of requests from people, asking for tips on how to prepare for this exam. In addition to replying to them, I thought best to also put up a blog post, to help others that might be preparing to sit for the same exam.

I have been a Microsoft person for years, so it was quite natural for me to transition to Microsoft Azure a few years back. Microsoft Azure, to put it simply, is awesome. It empowers its users to grow their IT much faster and beyond what their local datacenters can accommodate, with a simple pay-as-you-use model. This model has helped so many businesses become successful in such a short amount of time.

I start each New Year with a list of technologies that I was introduced to in the year that just finished but did not get a chance to properly get acquainted with. This list then condenses into my list of to-dos for that year. Last year, AWS made it onto my list, and since then I have been spending time finding out all about it.

Exams to me are not just a certification, but a chance to find out more about a technology and to re-confirm my understanding of it. That is why I try my best to learn as much as possible about the technologies being tested in an exam. What better way to learn more about AWS than to do the AWS Certified Solutions Architect – Associate exam 😉

I got my training material from https://acloud.guru. Ryan Kroonenburg teaches an awesome course that prepares you to sit for the exam. The course is not too expensive either, well within the budget of anyone.

However, a note of caution. If you are very new to AWS and think that just doing the course will be enough to pass the exam, then I have some news for you. The whole basis of the AWS exams is to test your hands-on experience, and just cramming the information before sitting the exam is setting yourself up for a fail. I would highly recommend doing the labs that Ryan takes you through in the course. AWS provides a free tier, which covers almost all that you will need to train for the AWS CSAA exam. You can make use of this to get lots of hands-on training. Also, read the whitepapers and FAQs. These provide detailed information about the AWS services. ACloud.guru also has forums where you can ask questions and answer questions others have posted.

To recap, my tips for passing the AWS CSAA exam

  • purchase the ACloud.guru course for AWS CSAA
  • go through the videos in the ACloud.guru course at least twice and do all the labs
  • answer all the section quizzes, the Mega Quizzes and Final exam questions and do them till you get at least 90% correct
  • read the AWS whitepapers and FAQs
  • participate in the ACloud.guru forums

You will soon realise that you are confident enough to sit the exam and that is when you book it and go sit for it.

I would highly recommend not waiting too long to sit for your exam (I have learnt from my mistake). I find that if I wait for more than a month to book my exam, I normally start forgetting all that I have learnt, and have to go over the material all over again. Everyone is different, so this might not be true for you. I would suggest aiming for 1 month of training, however in the 2nd week, book your exam for a date 2 weeks away. Doing this will firstly put you in panic mode, but then you will soon realise that you don’t have much time to study, and will start studying more intensely. By the last week, if you still think you are not fully prepared, you have 72 hours to reschedule the exam. If you are going to reschedule, then don’t reschedule to a date too far in the future.

I wish you all the best for the exam.

Re-execute the UserData script in an AWS Windows Instance

Bootstrapping is an awesome way of customising your instances in AWS (similar capability exists in Azure).

To enable bootstrapping, while configuring the launch instance, in Step 3: Configure Instance Details scroll down to the bottom and then expand Advanced Details.

You will notice a User data text box. This is where you can provide your bootstrap script. The script will be run when your instance is first launched.

I went ahead and entered my script in the text box and proceeded to complete my instance configuration. Once my instance was running, I initiated a Remote Desktop connection to it, to confirm that my script had run. Unfortunately, I couldn’t see any customisations (which meant my script didn’t run)

Thinking that the instance had not been able to access the user data, I opened up Internet Explorer and then browsed to the following url (this is an internal url that can be used to access the user-data)

http://169.254.169.254/latest/user-data/

I was able to successfully access the user-data, which meant that there were no issues with that.  However when checking the content, I noticed a typo! Aha, that was the reason why my customisations didn’t happen.
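As a side note, if you would rather check the user-data from a script than from a browser (and assuming Python is available on the instance), the same metadata URL can be queried with a few lines of code:

import urllib.request

#the instance metadata service is only reachable from within the instance itself
with urllib.request.urlopen('http://169.254.169.254/latest/user-data/') as response:
    print(response.read().decode())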

Unfortunately, according to AWS, user-data is only executed during launch (for those that would like to read, here is the official AWS documentation). To get the fixed bootstrap script to run, I would have to terminate my instance and launch a new one with the corrected script (I tried re-booting my windows instance after correcting my typo, however it didn’t run).

I wasn’t very happy on terminating my current instance and then launching a new one, since for those that might not be aware, AWS EC2 compute charges are rounded up to the next hour. Which means that if I terminated my current instance and launched a new one, I would be charged for 2 x 1hour sessions instead of just 1 x 1 hour!

So I set about trying to find another solution. And guess what, I did find it 🙂

Reading through the volumes of documentation on AWS, I found that when Windows Instances are provisioned, the service that does the customisations using user-data is called EC2Config. This service runs the initial startup tasks when the instance is first started and then disables them. HOWEVER, there is a way to re-enable the startup tasks later on 🙂 Here is the document that gives more information on EC2Config.

The Amazon Windows AMIs include a utility called EC2ConfigService Settings. This allows you to configure EC2Config to execute the user-data on next service startup. The utility can be found under All Programs (or you can search for it).

Once open, under General you will see the following option

Enable UserData execution for next service start (automatically enabled at Sysprep) eg. <script></script> or <powershell></powershell>

Tick this option and then press OK. Then restart your Windows Instance.

After your Windows Instance restarts, EC2Config will execute the userData (bootstrap script) and then it will automatically remove the tick from the above option so that the userData is not executed on subsequent restarts (or service starts)

There you go. A simple way to re-run your bootstrap scripts on an AWS Windows Instance without having to terminate the current instance and launching a new one.

There are other options available in the EC2ConfigService Settings that you can explore as well 🙂


Usable IP Addresses in an Azure {and AWS} Subnet

Over the last week I got asked an interesting question.

“Why is it that you always give the x.x.x.4 IP address to your first Azure virtual machine? Why don’t you start with x.x.x.1?”

This is a very interesting question, and it needs to be kept in mind whenever transitioning from on-premises to Azure. In Azure, there are a few IP addresses that are reserved for system use and cannot be allocated to virtual machines.

The first and last IP addresses of a subnet have always been unavailable for machine addressing because the first IP address is the network address and the last is the broadcast address for the subnet.

In addition to the above, the next 3 IP addresses from the beginning are used by Azure for internal use.

Always keep the above in mind when allocating IP addresses in Azure.

Below are some helpful links that can provide more information

https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-faq/

https://www.petri.com/understanding-ip-addressing-microsoft-azure

I came across a similar article for AWS. AWS also removes 5 IP addresses from the pool, for internal use. However, this article was more informative in regards to why these IP addresses are unavailable. I have a suspicion that Azure has the same reasons, however I couldn’t find any article on it.

Here is the AWS article http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

and below is the section that describes why the IP addresses are unavailable in AWS

The first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use, and cannot be assigned to an instance. For example, in a subnet with CIDR block 10.0.0.0/24, the following five IP addresses are reserved:

  • 10.0.0.0: Network address.
  • 10.0.0.1: Reserved by AWS for the VPC router.
  • 10.0.0.2: Reserved by AWS for mapping to the Amazon-provided DNS. (Note that the IP address of the DNS server is the base of the VPC network range plus two. For more information, see Amazon DNS Server.)
  • 10.0.0.3: Reserved by AWS for future use.
  • 10.0.0.255: Network broadcast address. We do not support broadcast in a VPC, therefore we reserve this address.
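If you ever need to work out how many addresses are actually usable in a subnet, Python's ipaddress module makes the arithmetic easy. A quick sketch, using the five reserved addresses AWS documents above:

import ipaddress

subnet = ipaddress.ip_network('10.0.0.0/24')

#AWS reserves the first four addresses and the last address in every subnet
usable = subnet.num_addresses - 5
print(f'{subnet} has {subnet.num_addresses} addresses, of which {usable} are usable') #256 addresses, 251 usable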