Creating a Contact Center in minutes using Amazon Connect

Background

In my previous blog (https://nivleshc.wordpress.com/2019/10/09/managing-amazon-ec2-instances-using-amazon-ses/), I showed how we can manage Amazon EC2 instances using emails. However, what if you wanted to go further than that? What if, instead of sending an email, you instead wanted to dial in and check the status of or start/stop your Amazon EC2 instances?

In this blog, I will show how I used the above as a foundation to create my own Contact Center. I enriched the experience by including an additional option for the caller: being transferred to a human agent. All this in minutes! Still skeptical? Read on and I will show you how I did all of this using Amazon Connect.

High Level Solution Design

Below is the high-level solution design for the Contact Center I built.

The steps (as denoted by the numbers in the diagram above) are explained below

  1. The caller dials the Direct Inward Dial (DID) number associated with the Amazon Connect instance
  2. Amazon Connect answers the call
  3. Amazon Connect invokes the AWS Lambda function to authenticate the caller.
  4. The AWS Lambda function authenticates the caller by checking their callerID against the entries stored in the authorisedCallers DynamoDB table. If there is a match, the first name and last name stored against the callerID are returned to Amazon Connect. Otherwise, an “unauthorised user” message is returned to Amazon Connect (a minimal sketch of this function is shown after this list).
  5. If the caller is unauthorised, Amazon Connect informs them of this and hangs up the call.
  6. If the caller is authorised, Amazon Connect uses the first name and last name provided by the AWS Lambda function to personalise a welcome message for them. Amazon Connect then provides the caller with two options:
      • (6a) press 1 to get the status of the Amazon EC2 instances. If this is pressed, Amazon Connect invokes an AWS Lambda function to get the status of the Amazon EC2 instances and plays the results to the caller
      • (6b) press 2 to talk to an agent. If this is pressed, Amazon Connect places the call in a queue, where it will be answered by the next available agent
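
Here is a minimal sketch of the authentication Lambda function from step 4, assuming a DynamoDB table named authorisedCallers keyed on phoneNumber (attribute names are illustrative; the full code ships with the CloudFormation template linked below). Amazon Connect expects a flat map of string key/value pairs in the response.

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('authorisedCallers')  # assumed table name

def lambda_handler(event, context):
    # Amazon Connect passes the caller's number in the contact data
    caller_id = event['Details']['ContactData']['CustomerEndpoint']['Address']

    response = table.get_item(Key={'phoneNumber': caller_id})
    item = response.get('Item')

    if item is None:
        # no match - tell Amazon Connect the caller is not authorised
        return {'authorised': 'false', 'message': 'unauthorised user'}

    # match found - return the caller's name so Connect can personalise the greeting
    return {
        'authorised': 'true',
        'firstName': item['firstName'],
        'lastName': item['lastName']
    }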


Preparation

My solution requires the following components

  • Amazon DynamoDB table to store authorised callers (an item in this table will have the format phonenumber, firstname, lastname)
  • AWS Lambda function to authenticate callers
  • AWS Lambda function to get the status of all Amazon EC2 instances in the region

I created the following AWS CloudFormation template to provision the above resources.
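
In outline, the template provisions resources along these lines (logical and physical names here are illustrative; the full template is linked below):

AWSTemplateFormatVersion: '2010-09-09'
Description: Resources for the Contact Center - DynamoDB table and Lambda functions

Resources:
  AuthorisedCallersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: authorisedCallers
      AttributeDefinitions:
        - AttributeName: phoneNumber
          AttributeType: S
      KeySchema:
        - AttributeName: phoneNumber
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1

  AuthenticateCallerFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: authenticateCaller
      Runtime: python3.6
      Handler: index.lambda_handler
      Role: !GetAtt LambdaExecutionRole.Arn   # execution role omitted for brevity
      Code:
        ZipFile: '...'                        # function body elided

  GetInstanceStatusFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: getInstanceStatus
      Runtime: python3.6
      Handler: index.lambda_handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: '...'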

The full AWS CloudFormation template can be downloaded from https://gist.github.com/nivleshc/926259dbbab22dd4890e0708cf488983

Implementation

Currently, AWS CloudFormation does not support Amazon Connect, so the implementation must be done manually.

Drawing on my own experience with setting up Amazon Connect solutions, I have observed that three stages are required to get a Contact Center up and running. These are:

  • Provisioning an Amazon Connect instance – this is straightforward and essentially is where an Amazon Connect instance is provisioned and made ready for your use
  • Configuring the Amazon Connect instance – this contains all the tasks to customise the Amazon Connect instance. It includes the configuration of the Direct Inward Dial (DID), hours of operation for the Contact Center, Routing profiles, users etc.
  • Creating a custom Contact flow – a Contact flow defines the customer experience of your Contact Center, from start to finish. Amazon Connect provides a few default Contact flows; however, it is highly recommended that you create one that aligns with your own business requirements.

Follow along and I will show you how to go about setting up each of the above-mentioned stages.

Provision the Amazon Connect Instance

  1. From the AWS Console, open the Amazon Connect service. Select the Sydney region (or a region of your choice; however, do keep in mind that at the moment, Amazon Connect is only available in a few regions)
  2. Enter an Access URL for your Amazon Connect Instance. This URL will be used to access the Amazon Connect instance once it has been provisioned.
  3. In the next screen, create an administrator account for this Amazon Connect instance
  4. The next prompt is for Telephony options. For my solution, I selected the following:
    1. Incoming calls: I want to handle incoming calls with Amazon Connect
    2. Outgoing calls: I want to make outbound calls with Amazon Connect
  5. In the next screen, Data Storage options are displayed. For my solution, I left everything as default.
  6. In the next screen, review the configuration and then click Create instance

Configure the Amazon Connect Instance

After the Amazon Connect instance has been successfully provisioned, use the following steps to configure it:

  1. Claim a phone number for your Amazon Connect instance. This is the number that users will be calling to interact with your Amazon Connect instance (to claim a non-toll-free local number, you must open a support case with AWS to prove that you have a local business in the country where you are claiming the phone number. Claiming a toll-free number is easier; however, it is more expensive)
  2. Create some Hours of operation profiles. These will be used when creating a queue
  3. Create a queue. Each queue can have different hours of operation assigned
  4. Create a routing profile. A user is associated with a routing profile, which defines their inbound and outbound queues.
  5. Create users. Once created, assign the users to a predefined security profile (administrator, agent etc) and also assign them to a specific routing profile

Create a custom Contact flow

A Contact flow defines the customer experience of your Contact Center, from start to finish. By default, Amazon Connect provides a few Contact flows that you can use. However, it is highly recommended that you create one that suits your own business requirements.

To create a new Contact flow, follow these steps:

  • Login to your Amazon Connect instance using the Access URL and administrator account (you can also access your Amazon Connect instance using the AWS Console and then clicking on Login as administrator)
  • Once logged in, from the left-hand side menu, go to Routing and then click on Contact flows
  • In the next screen, click on Create contact flow
  • Use the visual editor to create your Contact flow

Once the Contact flow has been created, attach it to your Direct Inward Dial (DID) phone number by using the following steps:

  • From the left-hand side menu, click on Routing and then Phone numbers.
  • Click on the respective phone number and change its Contact flow / IVR to the Contact flow you want to attach to this phone number.

Below is a screenshot of the Contact flow I created for my solution. It shows the flow logic I used and you can easily replicate it for your own environment. The red rectangles show where the AWS Lambda functions (mentioned in the pre-requisites above) are used.

This is pretty much all that is required to get your Contact Center up and running. It took me approximately thirty minutes from start to finish (this does not include the time required to provision the Amazon DynamoDB tables and AWS Lambda functions). However, I would recommend spending time on your Contact flows, as this is the brains of the operation. This must be done in conjunction with someone who understands the business really well and knows the outcomes that must be achieved by the Contact Center solution. There is a lot that can be done here, and the more time you invest in your Contact flow, the better the outcomes you will get.

The above is just a small part of what Amazon Connect is capable of. For its full set of capabilities, refer to https://aws.amazon.com/connect/

So, if you have been dreaming of building your own Contact Center but were worried about the cost or effort required, wait no more! You can now easily create one in minutes using Amazon Connect, pay for only what you use, and tear it down if you don’t need it anymore. However, before you start, I would strongly recommend that you familiarise yourself with the Amazon Connect pricing model. For example, you get charged a daily rate for any claimed phone numbers that are attached to your Amazon Connect instance (this is similar to a phone line rental charge). Full pricing is available at https://aws.amazon.com/connect/pricing/.

I hope the above has given you some insights into Amazon Connect. Till the next time, Enjoy!

Managing Amazon EC2 Instances using Amazon SES

Background

Most people know Amazon Simple Email Service (SES) just as a service for sending out emails. However, did you know that you can use it to receive emails as well? If this interests you, more information is available at https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email.html.

In this blog I will show how I provisioned a solution to manage my Amazon EC2 instances using emails. The solution uses Amazon SES and AWS Lambda. Now, some of you might be saying, can’t you just use the AWS console or app for this? Well, yes you can; however, for me personally, logging into the AWS console just to get the status of my Amazon EC2 instances, or to start/stop them, was more effort than I deemed necessary. The app surely makes this task trivial; however, the main purpose of this blog is to showcase the capabilities of Amazon SES.

Solution Architecture

Below is the high-level design for my solution.

The individual steps (labelled using numbers) are described below

  1. The admin sends an email to an address attached to an Amazon SES rule
  2. Amazon SES receives the email and performs a spam and virus check. If the email passes the check, Amazon SES invokes the manageInstances AWS Lambda function, passing the email’s headers and metadata to it (unfortunately, the contents of the email body are not passed)
  3. The manageInstances AWS Lambda function authenticates the sender based on the from address (this is a very rudimentary authentication system. A stronger authentication mechanism must be used if this solution is to be deployed in a production environment – maybe include a multi-factor authentication system). It extracts the command from the email’s subject and executes it
  4. The manageInstances AWS Lambda function uses Amazon SES to send the response of the command back to the admin
  5. Amazon SES delivers the email containing the command’s output to the admin

Prerequisites

To use Amazon SES for receiving incoming emails, first verify your domain within it and then point your domain’s DNS MX entry to your region’s Amazon SES endpoint. Full instructions to carry this out can be found at https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started.html
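
As an illustration, if your verified domain were example.com and you were using the us-east-1 Amazon SES endpoint, the MX record would look similar to the following (the domain and region here are assumptions for the example):

example.com.    MX    10    inbound-smtp.us-east-1.amazonaws.com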

Implementation

Lambda Function

The Lambda function is created first as it is required for the Amazon SES Rules.

The Lambda function carries out the following tasks

  • authenticates the sender by comparing the email’s from address with a predefined list of approved senders. To prevent a situation where the Lambda function can be inadvertently used as a spam bot, all emails from senders not on the approved list will be dropped.
  • checks if the specified command is in the list of commands that is currently supported. If yes, then the command is executed, and the output sent back to the admin. If the command is unsupported, a reply stating that an invalid command was specified is sent back to the admin

Here are the attributes of the Lambda function I created:

Function name: manageInstances
Runtime: Python 3.6
Region: North Virginia (us-east-1) – this ensures that the Lambda function and the Amazon SES rules are in the same region
Role: the role used by the Lambda function must have the following permissions attached to it

    ec2:DescribeInstances
    ec2:StartInstances
    ec2:StopInstances
    ses:SendEmail

Here is the code for the AWS Lambda function (set the approvedSenders list to contain the email addresses of approved senders).
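
What follows is a condensed sketch of the logic rather than the exact code from the gist: the email addresses are placeholders, the status output is simplified, and the region Easter egg described later is omitted.

import boto3

approvedSenders = ['admin@example.com']   # placeholder - list your approved senders here
fromAddress = 'ec2commands@example.com'   # placeholder - the address replies are sent from

def lambda_handler(event, context):
    # Amazon SES passes the email headers and metadata in the event
    mail = event['Records'][0]['ses']['mail']
    sender = mail['source']
    subject = mail['commonHeaders']['subject'].strip()

    # drop emails from unapproved senders, so the function cannot be used as a spam bot
    if sender not in approvedSenders:
        return

    parts = subject.split()
    command = parts[0].lower() if parts else 'help'
    ec2 = boto3.client('ec2')

    if command == 'status':
        lines = []
        for reservation in ec2.describe_instances()['Reservations']:
            for instance in reservation['Instances']:
                # the instance name comes from the tag with the key Name
                name = next((tag['Value'] for tag in instance.get('Tags', [])
                             if tag['Key'] == 'Name'), '')
                lines.append('{} {} {}'.format(instance['InstanceId'], name,
                                               instance['State']['Name']))
        reply = '\n'.join(lines)
    elif command == 'start' and len(parts) > 1:
        ec2.start_instances(InstanceIds=[parts[1]])
        reply = 'Starting ' + parts[1]
    elif command == 'stop' and len(parts) > 1:
        ec2.stop_instances(InstanceIds=[parts[1]])
        reply = 'Stopping ' + parts[1]
    else:
        reply = 'Commands: help | status | start {instance-id} | stop {instance-id}'

    # use Amazon SES to send the command output back to the sender
    boto3.client('ses').send_email(
        Source=fromAddress,
        Destination={'ToAddresses': [sender]},
        Message={'Subject': {'Data': 'Re: ' + subject},
                 'Body': {'Text': {'Data': reply}}})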

The full AWS Lambda function code can be downloaded from https://gist.github.com/nivleshc/f9b32a14d9e662701c3abcbb8f264306

Amazon SES Email Receiving Rule

Next, the Amazon SES Email Receiving rule that handles the incoming emails must be created. Please note that currently Amazon SES is supported in a few regions only. For my solution, I used the North Virginia (us-east-1) region.

Below are the steps to create the rule

  • Open the Simple Email Service page from within the AWS console (ensure you are in the correct AWS region)
  • From the left-hand side menu, navigate down to the Email Receiving section and then click Rule Sets.
  • The right-hand side of the screen will show the currently defined rule sets. I used the predefined default-rule-set.
  • Click on View Active Rule Set and then in the next screen click Create Rule.
  • In the next screen, for the recipient address, enter the email address to which the command emails will be sent (the email domain has to correspond to the domain that was verified with Amazon SES, as part of the prerequisites mentioned above)
  • In the next screen, for actions, select Lambda.
  • Select the name of the Lambda function that was created to manage the instances from the drop down (for me, this was manageInstances)
  • Ensure the Invocation type is set to Event.
  • You do not need to set the SNS topic; however, if you need to know when this Amazon SES action is carried out, select the appropriate SNS topic (you will need to create an SNS topic and subscribe to it using your email address)
  • Click Next Step.
  • In the next screen, provide a name for the rule. Ensure the options Enabled and Enable spam and virus scanning are ticked.
  • Click Next Step and then review the settings.
  • Click Create Rule.

Usage

The solution, once implemented, supports the following commands:

help   - provides information about the commands and their syntax
status - provides the status of all Amazon EC2 instances in the region that the Lambda function is running in. It lists the instance-id and name of those instances (name is derived from the tag with the key Name)
start {instance-id} - starts the Amazon EC2 instance that has the specified instance-id
stop {instance-id}  - stops the Amazon EC2 instance that has the specified instance-id

To use, send an email from an approved sender’s email to the email address attached to Amazon SES.

The table below shows what the subject must be for each command.

Command                                                  Subject
Help                                                     help
Status for instances in us-east-1                        status
Start an instance with instance-id i-0e7e011b42e814465   start i-0e7e011b42e814465
Stop an instance with instance-id i-0e7e011b42e814465    stop i-0e7e011b42e814465

The output of the status command is in the following format

<instance-id> <instance-name> <status> <private-ip> <public-ip>

For example:
i-03a1ab124f554z805 LinuxServer01 Running 172.16.31.10 52.10.100.34

The only problem with the solution is that all commands are performed on Amazon EC2 instances running in the same AWS region as the Lambda function. What if you wanted to carry out the commands on another region?

The keen-eyed among you would have spotted the Easter egg I hid in the Lambda function code. Here is what the subject must be if the command is to be carried out in an AWS region different from the one where the Lambda function is running (simply append the AWS region to the end of the command).

Command                                                                              Subject
Help                                                                                 help
Status for instances in ap-southeast-2 (Sydney)                                      status ap-southeast-2
Start an instance with instance-id i-0e7e011b42e814465 in ap-southeast-2 (Sydney)    start i-0e7e011b42e814465 ap-southeast-2
Stop an instance with instance-id i-0e7e011b42e814465 in ap-southeast-2 (Sydney)     stop i-0e7e011b42e814465 ap-southeast-2

There you go! Now you can keep an eye on and control your Amazon EC2 instances with just your email.

A good use case can be when you are commuting and need to RDP into your Windows Amazon EC2 instance from your mobile (I am guilty of doing this at times). You can quickly start the Amazon EC2 instance, get its public IP address, and then connect using RDP. Once finished, you can shut down the instance to ensure you don’t keep getting charged.

I hope this blog was useful to you. Till the next time, Enjoy!

Using Amazon Alexa to drive a radio-controlled car – Part 1

Introduction

Over the Easter holidays, while watching my son play with his radio-controlled toy car, I had a strange thought pop into my head. Instead of using the sticks on the remote control, wouldn’t it be cool to control the car using just your voice? You could tell the car to move forward, backward, left or right. What if you could save all the moves you have asked the car to take so far and then, at a later time, get the car to replay all those moves?

Now, that would be a car I would love to play with!

In this blog, I will introduce the high-level design for accomplishing the above-mentioned goal. Then over the next few blogs I will take you through the steps to transform the high-level design into a working prototype.

Hardware Requirements

For this prototype, I settled on using the following hardware devices

  • Amazon Echo Dot – this will be used to process my voice commands
  • Raspberry Pi 3 with a GPIO expansion Breadboard
  • A set of four 5V relay board modules
  • A radio-controlled race car
  • A soldering iron, solder wire and a digital multimeter

Design considerations

To make the prototype work, I decided to create an Amazon Alexa Skill called race car. This will be used to process my voice commands.

Challenge #1: How would I control the radio-controlled car?

I found two options for this

1. Completely bypass the remote control and send the radio frequency instructions directly to the race car

2. Emulate the button presses on the remote control so that it “thinks” someone is pressing those buttons and then it sends the appropriate radio frequency instructions to the race car

Option chosen: I chose option 2 because it required the least amount of work. For this option, the only thing I needed to figure out was what happened when a button was pressed. After some experimentation, I found the contacts on the remote control’s printed circuit board (PCB) that I could open and close to emulate the button presses.

Challenge #2: I will use a Python script running on a Raspberry Pi 3 within my home network to emulate the button presses on the remote control. How will I get the Amazon Alexa Skill to connect to my Raspberry Pi 3, which is running on my internal home network?

Solution: I found a neat trick at https://developer.amazon.com/blogs/post/Tx14R0IYYGH3SKT/flask-ask-a-new-python-framework-for-rapid-alexa-skills-kit-development. To expose my internal Raspberry Pi 3 Python script to the Amazon Alexa Skill, I will use ngrok (https://ngrok.com) to create a secure tunnel between my Raspberry Pi 3 and the ngrok service. This provides me with an HTTPS endpoint within ngrok’s domain, which forwards any requests directed at it, through the secure tunnel, to the Python script running on my internal Raspberry Pi 3.
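
To make this concrete, below is a minimal sketch of the kind of Flask-Ask script that will run on the Raspberry Pi 3. The intent name, slot name and GPIO pin numbers are assumptions for illustration; the actual skill and wiring will be covered in the next parts of this series.

from time import sleep

import RPi.GPIO as GPIO
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

# assumed wiring - one relay (emulated button) per direction
GPIO.setmode(GPIO.BCM)
BUTTON_PINS = {'forward': 17, 'backward': 27, 'left': 22, 'right': 23}
for pin in BUTTON_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

@ask.intent('MoveIntent')  # hypothetical intent with a slot named direction
def move(direction):
    pin = BUTTON_PINS.get((direction or '').lower())
    if pin is None:
        return statement('Sorry, I cannot move {}'.format(direction))
    GPIO.output(pin, GPIO.HIGH)   # close the contact, emulating a button press
    sleep(0.5)                    # hold the "button" briefly
    GPIO.output(pin, GPIO.LOW)    # release it
    return statement('Moving {}'.format(direction))

if __name__ == '__main__':
    # expose this with: ngrok http 5000, then set the HTTPS URL as the skill endpoint
    app.run(port=5000)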

High Level Design for the prototype

Using the above-mentioned design considerations, the below schematic was developed to create the prototype.

Let’s go through each of the steps (denoted by the numbers) to better understand the design.

1. The user will invoke the race car Amazon Alexa Skill and ask to either move the car in a certain direction, save all the movements that have been requested so far, or run a previously saved set of movements.

2. The Alexa device (Amazon Echo Dot) will record the audio from the user and send it to the Alexa Cloud for processing. The Alexa Cloud converts the audio into a JSON request using Natural Language Processing (NLP). Based on the invocation name, it will pass the JSON request to the race car Amazon Alexa Skill.

3. The race car Amazon Alexa Skill will check to ensure that the intent supplied by the user is valid. Once confirmed, the race car Amazon Alexa Skill will pass the JSON to the endpoint defined for the skill. In our case, this is an endpoint that is hosted at ngrok (https://ngrok.com)

4. The ngrok endpoint will receive the JSON file from the race car Amazon Alexa Skill and then forward it, using the secure tunnel, to the Python script running on the Raspberry Pi 3 within the home network. The Python script will use the Flask-Ask framework to process the intents from the Alexa Skills Kit (more information about Flask-Ask can be found at https://flask-ask.readthedocs.io/en/latest/)

5. If the user requested to save all the car movements carried out so far, then the Python script will write the movements to a table within Amazon DynamoDB.

6. If the user requested to load a previously saved set of movements, then the Python script will read the movements from the table within Amazon DynamoDB.

7. If the user requested to either load a previously saved set of movements or to move the car in a certain direction, the Python script will emulate the appropriate button presses on the remote control.

8. The remote control will translate the emulated button presses into radio frequency instructions and send them to the car. The car will receive these instructions and move accordingly.

To give you a sneak peek of the prototype, checkout the video at https://youtu.be/4SMYDhuri0Q (there are some minor bugs with the car movement which I intend on getting fixed as soon as possible).

In the next blog in this series, we will go through the process of “hacking” the remote control and also setting up the Raspberry Pi 3 ancillary hardware.

I hope to see you then.

Till then, enjoy!

Building a Breakfast Ordering Skill for Amazon Alexa – Part 1

Introduction

At the AWS Summit Sydney this year, Telstra decided to host a breakfast session for some of their VIP clients. This was more of a networking session, to get to know the clients much better. However, instead of having a “normal” breakfast session, we decided to take it up one level 😉

Breakfast ordering is quite “boring” if you ask me 😉 The waitress comes to the table, gives you a menu and asks what you would like to order. She then takes the order and after some time your meal is with you.

As it was AWS Summit, we decided to sprinkle a bit of technical fairy dust on the ordering process. Instead of having the waitress take the breakfast orders, we contemplated the idea of using Amazon Alexa instead 😉

I decided to give the Alexa skill development a go. However, not having any prior Alexa skill development experience, I anticipated an uphill battle, having to first learn the product and then develop for it. To my amazement, the learning curve wasn’t too steep and over a weekend, spending just 12 hours in total, I had a working proof of concept breakfast ordering skill ready!

Here is a link to the proof of concept skill https://youtu.be/Z5Prr31ya10

I then spent a week polishing the Alexa skill, giving it more “personality” and adding a more “human” experience.

All the work paid off when I got told that my Alexa skill would be used at the Telstra breakfast session! I was over the moon!

For the final product, to make things even more interesting, I created a business intelligence chart using Amazon QuickSight, showing the popularity of each of the food and drink items on the menu. The popularity was based on the orders that were being received.


Using a laptop, I displayed the chart near the Amazon Echo Dot. This was to help people choose what food or drink they wanted to order (a neat marketing trick 😉). If you would like to know more about Amazon QuickSight, you can read about it at Amazon QuickSight – An elegant and easy to use business analytics tool

Just as a teaser, you can watch one of the ordering scenarios for the finished breakfast ordering skill at https://youtu.be/T5PU9Q8g8ys

In this blog, I will introduce the architecture behind Amazon Alexa and prepare you for creating an Amazon Alexa Skill. In the next blog, we will get our hands dirty with creating the breakfast ordering Alexa skill.

How does Amazon Alexa actually work?

I have heard a lot of people use the name “Alexa” interchangeably for the Amazon Echo devices. As good as it is for Amazon’s marketing team, unfortunately, I have to set the record straight. Amazon Echo devices are the physical devices that Amazon sells, which interface with the Alexa Cloud. You can see the whole range at https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b?ie=UTF8&node=9818047011. These devices don’t have any smarts in them. They sit in the background listening for the “wake” command, and then they start streaming the audio to the Alexa Cloud.

The Alexa Cloud is where all the smarts are located. Using speech recognition, machine learning and natural language processing, the Alexa Cloud converts the audio to text. It identifies the skill name that the user requested, the intent and any slot values it finds (these will be explained further in the next blog). The intent and slot values (if any) are passed to the identified skill. The skill processes the input using some form of compute (AWS Lambda in my case) and then passes the output back to the Alexa Cloud. The Alexa Cloud converts the skill output to Speech Synthesis Markup Language (SSML) and sends it to the Amazon Echo device. The device then converts the SSML to audio and plays it to the user.

Below is an overview of the process.


Diagram source: https://developer.amazon.com/blogs/alexa/post/1c9f0651-6f67-415d-baa2-542ebc0a84cc/build-engaging-skills-what-s-inside-the-alexa-json-request
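
To make the request/response halves concrete, here is a minimal sketch of what a skill’s backend (an AWS Lambda handler, in my case) receives and returns. The intent and slot names are hypothetical; the JSON shapes follow the Alexa Skills Kit request and response format.

def lambda_handler(event, context):
    request = event['request']

    if request['type'] == 'IntentRequest' and \
            request['intent']['name'] == 'OrderBreakfastIntent':  # hypothetical intent
        food = request['intent']['slots']['food']['value']  # slot value filled by the Alexa Cloud
        speech = 'You have ordered {}. Coming right up!'.format(food)
    else:
        speech = 'Welcome! What would you like to order?'

    # the Alexa Cloud converts this text to SSML/audio and sends it to the Echo device
    return {
        'version': '1.0',
        'response': {
            'outputSpeech': {'type': 'PlainText', 'text': speech},
            'shouldEndSession': False
        }
    }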

Getting things ready

Getting an Alexa enabled device

The first thing to get is an Alexa enabled device. Amazon has released quite a few different varieties of Alexa enabled devices. You can check out the whole family here.

If you are keen to try a side project, you can build your own Alexa device using a Raspberry Pi. A good guide can be found at https://www.lifehacker.com.au/2016/10/how-to-build-your-own-amazon-echo-with-a-raspberry-pi/

You can also try out EchoSim (Amazon Echo Simulator). This is a browser-based interface to Amazon Alexa. Please ensure you read the limits of EchoSim on their website. For instance, it cannot stream music.

For developing the breakfast ordering skill, I decided to purchase an Amazon Echo Dot. It’s a nice compact device, which doesn’t cost much and can run off any USB power source. For the Telstra Breakfast session, I actually ran it off my portable battery pack 😉

Create an Amazon Account

Now that you have got yourself an Alexa enabled device, you will need an Amazon account to register it with. You can use one that you already have or create a new one. If you don’t have an Amazon account, you can either create one beforehand by going to https://www.amazon.com or you can create it straight from the Alexa app (the Alexa app is used to register the Amazon Echo device).

Setup your Amazon Echo Device

Use the Alexa app to setup your Amazon Echo device. When you login to the app, you will be asked for the Amazon Account credentials. As stated above, if you don’t have an Amazon account, you can create it from within the app.

Create an Alexa Developer Account

To create skills for Alexa, you need a developer account. If you don’t have one already, you can create one by going to https://developer.amazon.com/alexa. There are no costs associated with creating an Alexa developer account.

Just make sure that the username you choose for your Alexa developer account matches the username of the Amazon account to which your Amazon Echo is registered. This will enable you to test your Alexa skills on your Amazon Echo device without having to publish them on the Alexa Skills Store (the skills will show under Your Skills in the Alexa App)

Create an AWS Free Tier Account

In order to process any of the requests sent to the breakfast ordering Alexa skill, we will make use of AWS Lambda. AWS Lambda provides a cost-effective way to run code because you are only charged for the time that your code runs; there are no costs for any idle time.

If you already have an AWS account, you can use that; otherwise, you can sign up for an AWS Free Tier account by going to https://aws.amazon.com. AWS provides a lot of services for free for the first 12 months under the Free Tier, with some services continuing the free tier allowance even beyond the 12 months (AWS Lambda is one such service). For a full list of Free Tier services, visit https://aws.amazon.com/free/

High Level Architecture for the Breakfast Ordering Skill

Below is the architectural overview for the Breakfast Ordering Skill that I built. I will introduce you to the various components over the next few blogs.

In the next blog, I will take you through the Alexa Developer console, where we will use the Alexa Skills Kit (ASK) to start creating our breakfast ordering skill. We will define the invocation name, intents and slot names for our Alexa Skill. Not familiar with these terms? Don’t worry, I will explain them in the next blog. I hope to see you there.

See you soon.


Deploying an Active Directory Forest using AWS CloudFormation

Introduction

Wow, it is amazing how time flies. Almost two years ago, I wrote a set of blogs that showed how one can use Azure Resource Manager (ARM) templates and Desired State Configuration (DSC) scripts to deploy an Active Directory Forest automatically.

For those that would like to take a trip down memory lane, here is the link to the blog.

Recently, I have been playing with AWS CloudFormation and I am simply in awe by its power. For those that are not familiar with AWS CloudFormation, it is a tool, similar to Azure Resource Manager, that allows you to “code” your computing infrastructure in Amazon Web Services. Long gone are the days when you would have to sit down, pressing each button and choosing each option to deploy your environment. Cloud computing provides you with a way to interface with the fabric, so that you can script the build of your environment. The benefits of this are enormous. Firstly, it allows you to standardise all your builds. Secondly, it allows you to have a live as-built document (the code is the as-built document). Thirdly, the code is reusable. Most important of all, since the deployment is now scripted, you can automate it.

In this blog I will show you how to create an AWS CloudFormation template to deploy an AWS Elastic Compute Cloud (EC2) Windows Server instance. The template will also include steps to promote the EC2 instance to a Domain Controller in a new Active Directory Forest.

Guess what the best part is? Once the template has been created, all you will have to do is to load it into AWS CloudFormation, provide a few values and sit back and relax. AWS CloudFormation will do everything for you from there on!

Sounds interesting? Let’s begin.

Creating the CloudFormation Template

A CloudFormation template starts with a definition of the parameters that will be used. The person running the template (let’s refer to them as the operator) will be asked to provide the values for these parameters.

When defining a parameter, you will provide the following

  • a name for the parameter
  • its type
  • a brief description for the parameter so that the operator knows what it will be used for
  • any constraints you want to put on the parameter, for instance
    • a maximum length (for strings)
    • a list of allowed values (in this case a drop down list is presented to the operator, to choose from)
  • a default value for the parameter

For our template, we will use the following parameters.
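
The gist linked at the end of this post contains the full Parameters section; below is an abbreviated sketch showing the pattern (the parameter names, allowed values and defaults here are illustrative):

Parameters:
  ServerName:
    Type: String
    Description: Hostname for the EC2 instance
    MaxLength: 15
  OSVersion:
    Type: String
    Description: Windows Server version to deploy
    AllowedValues:
      - WindowsServer2012R2
      - WindowsServer2016
    Default: WindowsServer2016
  Environment:
    Type: String
    Description: Environment the instance will be deployed into
    AllowedValues:
      - dev
      - prod
  AvailabilityZone:
    Type: String
    Description: Availability Zone to deploy into
    AllowedValues:
      - a
      - b
  DomainName:
    Type: String
    Description: Fully qualified domain name for the new Active Directory Forest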

Next, we will define some mappings. Mappings allow us to define the values for variables, based on what value was provided for a parameter.

When creating EC2 instances, we need to provide a value for the Amazon Machine Image (AMI) to be used. In our case, we will use the OS version to decide which AMI to use.

To find the subnet into which the EC2 instance will be deployed, we will use the Environment and AvailabilityZone parameters.

The code below defines the mappings we will use
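
Here is an abbreviated sketch of those mappings (the AMI and subnet ids are placeholders; substitute the values for your account). A value is later looked up with !FindInMap, for example !FindInMap [AMIMap, !Ref OSVersion, AMI]:

Mappings:
  AMIMap:
    WindowsServer2012R2:
      AMI: ami-11111111    # placeholder AMI id
    WindowsServer2016:
      AMI: ami-22222222    # placeholder AMI id
  SubnetMap:
    dev:
      a: subnet-aaaaaaaa   # placeholder subnet ids
      b: subnet-bbbbbbbb
    prod:
      a: subnet-cccccccc
      b: subnet-dddddddd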

The next section in the CloudFormation template is Resources. This defines all resources that will be created.

If you have any experience deploying Active Directory Forests, you will know that it is extremely simple to do it using PowerShell scripts. Guess what, we will be using PowerShell scripts as well 😉 Now, after the EC2 instance has been created, we need to provide the PowerShell scripts to it, so that it can run them. We will use AWS Simple Storage Service (S3) buckets to store our PowerShell scripts.

To ensure our PowerShell scripts are stored securely, we will allow access to them only via a certain role and policy.

The code below will create an AWS Identity and Access Management (IAM) role and policy to access the S3 Bucket where the PowerShell scripts are stored.
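
In outline, the role and policy look similar to the sketch below (the bucket name is a placeholder; the instance profile is what actually gets attached to the EC2 instance):

# (these resources sit inside the Resources section)
  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: {Service: ec2.amazonaws.com}
            Action: sts:AssumeRole
      Policies:
        - PolicyName: s3-scripts-read
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::my-scripts-bucket/*   # placeholder bucket
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles: [!Ref InstanceRole]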

We will use cfn-init to do all the heavy lifting for us once the EC2 instance has been created. cfn-init is a helper utility that is present by default in AWS-provided Windows EC2 AMIs, and we can ask it to perform tasks for us.

To trigger cfn-init, we will use the UserData feature of EC2 instance provisioning. cfn-init, when started, will check the instance’s CloudFormation metadata for the credentials it will use, and it will also check it for all the tasks it needs to perform.

Below is the metadata that will be used. For simplicity, I have hardcoded the URL to the files in the S3 bucket.
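
Abbreviated, the metadata looks similar to this sketch (bucket name, file names and logical resource names are illustrative; the instance also references the instance profile from the previous sketch):

  DomainController:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Authentication:
        S3AccessCreds:
          type: S3
          roleName: !Ref InstanceRole
          buckets: [my-scripts-bucket]
      AWS::CloudFormation::Init:
        configSets:
          config:
            - get-files
            - configure-instance
        get-files:
          files:
            c:\s3-downloads\scripts\Add-WindowsComponents.ps1:
              source: https://s3.amazonaws.com/my-scripts-bucket/Add-WindowsComponents.ps1
              authentication: S3AccessCreds
            c:\s3-downloads\scripts\Configure-ADForest.ps1:
              source: https://s3.amazonaws.com/my-scripts-bucket/Configure-ADForest.ps1
              authentication: S3AccessCreds
        configure-instance:
          commands:
            1-set-executionpolicy:
              command: powershell.exe -Command Set-ExecutionPolicy Unrestricted -Force
            2-rename-computer:
              command: !Sub powershell.exe -Command Rename-Computer -NewName ${ServerName} -Restart
              waitAfterCompletion: forever
            3-add-components:
              command: powershell.exe -File c:\s3-downloads\scripts\Add-WindowsComponents.ps1
            4-create-forest:
              command: !Sub powershell.exe -File c:\s3-downloads\scripts\Configure-ADForest.ps1 ${DomainName}
              waitAfterCompletion: forever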

As you can see, I have first defined the role that cfn-init will use to access the S3 bucket. Next, the following tasks will be carried out, in the order defined in the configuration set

  • get-files
    • it will download the files from S3 and place them in the local directory c:\s3-downloads\scripts.
  • configure-instance (the commands in this section are run in alphabetical order; that is why I have prefixed them with a number, to ensure they follow the order I want)
    • It will change the execution policy for PowerShell to unrestricted (please note that this is just for demonstration purposes and the execution policy should not be made this relaxed).
    • next, the name of the server will be changed to what was provided in the Parameters section
    • the following Windows Components will be installed (as defined in the Add-WindowsComponents.ps1 script file)
      • RSAT-AD-PowerShell
      • AD-Domain-Services
      • DNS
      • GPMC
    • the Active Directory Forest will be created, using the Configure-ADForest.ps1 script and the values provided in the Parameters section

In the last part of the CloudFormation template, we will provide the UserData information that will trigger cfn-init to run and do all the configuration. We will also tag the EC2 instance, based on values from the Parameters section.

For simplicity, I have hardcoded the security group that will be attached to the EC2 instance (this is defined as GroupSet under NetworkInterfaces). You can easily create an additional parameter for this, if you want.

Finally, our template will output the instance’s hostname, the environment it has been created in, and its private IP address. This provides an easy way to identify the EC2 instance once it has been created.

Below is the last part of the template
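
In outline, it looks similar to the following sketch (logical names match the earlier sketches and are illustrative):

  DomainController:
    Type: AWS::EC2::Instance
    Properties:
      # ... ImageId, NetworkInterfaces, IamInstanceProfile etc. as described above ...
      UserData:
        Fn::Base64: !Sub |
          <script>
          cfn-init.exe -v --stack ${AWS::StackName} --resource DomainController --configsets config --region ${AWS::Region}
          </script>
      Tags:
        - Key: Environment
          Value: !Ref Environment

Outputs:
  Hostname:
    Value: !Ref ServerName
  Environment:
    Value: !Ref Environment
  PrivateIP:
    Value: !GetAtt DomainController.PrivateIp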

Now all you have to do is login to AWS CloudFormation, load the template we have created, provide the parameter values and sit back and relax.

AWS CloudFormation will take it from here and do everything for you 😉

How easy was that? Magic 🙂

The complete CloudFormation template is available at https://gist.github.com/nivleshc/867b1a2ca119c7d22cf215b5a9a5de02

The two PowerShell Scripts that are used in the CloudFormation template can be downloaded using the links below

Add-WindowsComponents.ps1

Configure-ADForest.ps1

For anyone deploying an Active Directory Forest in AWS, I hope the above comes in handy.

Enjoy 😉

Amazon QuickSight – An elegant and easy to use business analytics tool

Introduction

Recently, I had a requirement for a tool to visualise some data I had collected. My requirements were very simple. I didn’t want something that would cost me a lot, and at the same time I wanted the reports to be elegant and informative. Most of all, I didn’t want to have to go through pages and pages of documentation to learn how to use it.

As my data was within Amazon Web Services (AWS), I thought to check if AWS had any such offerings. Guess what, there was indeed a tool just for what I wanted, and after using it, I was amazed at how simple and elegant it is.

In this blog, I will show how you can easily get started with Amazon QuickSight. I will take you through the steps to import your data into Amazon QuickSight and then create some informative visualisations.

Some background on Amazon QuickSight

Pricing

Amazon QuickSight is very inexpensive; in fact, if you don’t have too much data, you won’t have to pay anything!

For standard edition use, Amazon QuickSight provides 1GB of SPICE for the first user free per month. SPICE is an acronym for Super-fast, Parallel, In-memory Calculation Engine; it is the calculation engine that Amazon QuickSight uses. It combines columnar storage, in-memory technologies enabled through the latest hardware innovations, machine code generation, and data compression to allow users to run interactive queries on large datasets and get rapid responses.

Any additional SPICE is priced at $USD0.25 per GB/month. For the latest pricing, please refer to https://aws.amazon.com/quicksight/#Pricing

Data Sources

Currently Amazon QuickSight supports the following data sources

  • Relational Data Sources
    • Amazon Athena
    • Amazon Aurora
    • Amazon Redshift
    • Amazon Redshift Spectrum
    • Amazon S3
    • Amazon S3 Analytics
    • Apache Spark 2.0 or later
    • Microsoft SQL Server 2012 or later
    • MySQL 5.1 or later
    • PostgreSQL 9.3.1 or later
    • Presto 0.167 or later
    • Snowflake
    • Teradata 14.0 or later
  • File Data Sources
    • CSV/TSV – (comma separated, tab separated value text files)
    • ELF/CLF – Extended and common log format files
    • JSON – Flat or semi-structured data files
    • XLSX – Microsoft Excel files

Unfortunately, Amazon DynamoDB is currently not supported as a native data source. Since my data is in Amazon DynamoDB, I had to write some custom AWS Lambda functions to export it to a csv file, so that it could be imported into Amazon QuickSight.
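
For the curious, the export function looked similar to the sketch below (the table, bucket and key names here are placeholders):

import csv
import io

import boto3

def lambda_handler(event, context):
    table = boto3.resource('dynamodb').Table('orders')  # placeholder table name

    # scan the whole table, paginating via LastEvaluatedKey (fine for small tables)
    items = []
    response = table.scan()
    items.extend(response['Items'])
    while 'LastEvaluatedKey' in response:
        response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
        items.extend(response['Items'])

    if not items:
        return

    # write the items out as CSV (assumes every item has the same attributes)
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=sorted(items[0].keys()))
    writer.writeheader()
    writer.writerows(items)

    # upload the CSV to the S3 bucket that Amazon QuickSight reads from
    boto3.client('s3').put_object(Bucket='sample', Key='orders.csv',
                                  Body=buffer.getvalue())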

Ok, time for that walk-through I promised earlier. For this blog, I will be using an S3 bucket as my data source. It will contain the CSV files that I will use for analysis in Amazon QuickSight.

Step 1 – Create S3 buckets

If you haven’t already done so, create an S3 bucket that will contain the csv files. The S3 bucket does not have to be publicly accessible. Once created, upload the csv files into the S3 bucket.

In my case, the csv file is called orders.csv and its location is https://s3.amazonaws.com/sample/orders.csv (to get the URL to your S3 file, login to the S3 console and navigate to the S3 bucket that contains the file. Click the S3 bucket to open it, then click the file name to open its properties. Under Overview you will see Link. This is the URL to the file)

Step 2 – Create an Amazon QuickSight Account

Before you start using Amazon QuickSight, you must create an account. Unfortunately, I couldn’t find a way for creating an Amazon QuickSight account without creating an Amazon AWS account. If you don’t have an existing Amazon AWS account, you can create an AWS Free Tier account. Once you have got an AWS account, go ahead and create an Amazon QuickSight account at https://aws.amazon.com/quicksight/.

While creating your Amazon QuickSight account, you will be asked if you would like Amazon QuickSight to auto-discover your Amazon S3 buckets. Enable this and then click to Choose S3 buckets. Choose the S3 bucket that you created in Step 1 above. This will give Amazon QuickSight read-only access to the S3 bucket, so that it can read the data for analysis.

Step 3 – Create a manifest file

A manifest file is a JSON file that provides the location and format of the data files to Amazon QuickSight. This is required when creating a data set for S3 data sources. Please refer to https://docs.aws.amazon.com/quicksight/latest/user/supported-manifest-file-format.html if you would like more information about manifest files.

Below is my manifest file, which I have affectionately named ordersmanifest.json.

{
   "fileLocations": [
      {
         "URIs": [
            "https://s3.amazonaws.com/sample/orders.csv"
         ]
      }
   ],
   "globalUploadSettings": {
      "format": "CSV",
      "delimiter": ",",
      "textqualifier": "'",
      "containsHeader": "true"
   }
}

Once created, upload the manifest file into the same S3 bucket as to where the csv file is stored.

Step 4 – Create a data set

  • Login to your Amazon QuickSight account. From the top right, click on Manage data
  • In the next screen, click on New data set
  • In the next screen, for Create a Data Set FROM NEW DATA SOURCES, click on S3
  • In the next screen
    • provide a name for the data source
    • for Upload a manifest file ensure URL is clicked and enter the URL to the manifest file (you can get the URL by logging into the S3 console, and then clicking on the manifest file to reveal its properties. Under the Overview tab, you will see Link. This is the URL to the manifest file).
    • Click Connect
    • Amazon QuickSight will now read the manifest file and then import the csv file to SPICE, and a confirmation screen will be shown.
    • Click on Edit/Preview data.
    • In the next screen, you will see the contents of the data file that was imported, along with the Fields name on the left. If you want to exclude any columns from the analysis, simply untick them (I unticked orderTime (S) since I didn’t need it).
    • By default, the data is called Group 1. To customise the name, replace Group 1 with a text of your choice (I have renamed my data to Orders Data).
    • Click Save & visualize from the top menu

Step 5 – Create Visualisations

Now that you have imported the data into SPICE, you can start analysing it and creating visualisations.

After step 4, you should be in the Analysis section.

  • Depending on which visualisation you want, you can select the respective type under Visual types from the bottom left-hand side of the screen. For my visualisations, I chose Pie Chart (side note – you will notice that orderTime (S) isn’t listed under the Fields list. This is because we had unticked it in the previous screen).
  • I want to create two Pie Charts, one to show me analysis about what is the most popular foodName and another to find out what is the most popular drinkName. For the first Pie Chart, drag foodName (S) from the Fields list to the Value – Add a measure here box in the top of the screen. Then drag foodName (S) from the Fields list to the Group/Color – Add a dimension here box in the top of the screen.
  • You can customise the visualisation title Count of Foodname (S) by Foodname (S) by clicking it and then changing the text (I have changed the title to Popularity of Food Types).
  • If you look closely, the legend on the right-hand side doesn’t serve much purpose since the pie slices are already labelled quite well. You can also get rid of the legend to gain more space for your visual. To do this, click on the down arrow above FoodName (S) on the right and then select Hide legend.
  • Next, let’s create a Pie Chart visualisation for drinkName. From the top menu, click on Add and then Add visual.
  • You will now have another canvas at the bottom of the first Pie Chart. Click this new canvas area to select it (a blue border will appear to show that it is selected). From Visual types at the bottom left-hand side, click on the Pie Chart visual. Then from the top, click on Field wells to expose the Value and Group/Color boxes for the second canvas.
  • From the Fields list on the left, drag drinkName (S) to the Value – Add a measure here box in the top of the screen. Then drag drinkName (S) from the Fields list to the Group/Color – Add a dimension here box in the top of the screen.
  • We are almost done. I actually want the two Pie Charts to sit side by side, instead of one on top of the other. To do this, I will show you a neat trick. In each of the visuals, at the bottom right border, you will see two diagonal lines. If you move your mouse pointer over them, they change to a resizing cursor. Use this to resize the visual’s canvas area. Also, in the middle of the top border of the visual, you will see two rows of gray dots. Click your mouse pointer on this and drag to the location you want to move the visual to.
  • I have hidden the legend for the second visual, customised the title, resized both visuals and moved them side by side. Voila! Below is what I get. Not bad, aye!

Step 6 – Create a dashboard

Now that the visuals have been created, they can be shared with others. This can be done by creating a dashboard. A dashboard is a read-only snapshot of the analysis. When you share the dashboard with others, they can view and filter the dashboard data; however, any filters applied to the dashboard visual exist only while the user is viewing the dashboard, and aren’t saved once it is closed.

One thing to note about sharing dashboards – you can only share dashboards with users who have an Amazon QuickSight account.

Creating a dashboard is very easy.

  • In the Analysis screen, on the top right corner, click on Share and then select Create dashboard.
  • You can either replace an existing dashboard or create a new one. In our case, since we are creating a new dashboard, select Create a new dashboard as and enter a name for the dashboard. Once finished, click Create dashboard.
  • You will then be asked to enter the username or email address of those you want to share the dashboard with. Enter this and click on Share.
  • That’s it, your dashboard is now created. To access it, go to the Amazon QuickSight home screen (click on the Amazon QuickSight icon on the top left-hand side of the screen) and then click on All dashboards. Those that you have shared the dashboard with will also be able to see it once they login to their Amazon QuickSight account.

Step 7 – Refreshing the Data Set

If your data set continually changes, your visualisations/dashboards will not show the updated information unless the data set is refreshed. Refreshing imports the new data into SPICE, which then automatically updates the analyses, visualisations and dashboards.

Note: you will have to manually reload the webpage to see the updated visualisations and dashboard

There are two ways of refreshing data sets. One is to do it manually while the other is to use a schedule. The scheduled data refresh allows for the data to be automatically refreshed at a certain time daily, weekly or monthly. A maximum of five scheduled refreshes can be configured.

The steps below show how you can manually refresh the data or create schedules to refresh the data

  • From the Amazon QuickSight main screen, click on Manage data from the top left of the screen.
  • In the next screen, you will see all your currently configured data sets. Click the Orders Data data set (this is the one we had created previously).
  • In the next screen, you will see Refresh Now and Schedule refresh.
  • Clicking on Refresh Now will manually refresh the data. Clicking on Schedule refresh will bring up the screen where you can configure a schedule for refreshing the data automatically.


That’s it folks! Wasn’t that simple? If you already have an Amazon AWS account, I would strongly recommend giving Amazon QuickSight a try for all your analytics needs. Even if you don’t have an Amazon AWS account, I would still suggest getting an AWS free tier account to try it out.

Enjoy 😉