Building a Breakfast Ordering Skill for Amazon Alexa – Part 1

Introduction

At the AWS Summit Sydney this year, Telstra decided to host a breakfast session for some of their VIP clients. This was more of a networking session, to get to know the clients better. However, instead of having a “normal” breakfast session, we decided to take it up one level 😉

Breakfast ordering is quite “boring” if you ask me 😉 The waitress comes to the table, gives you a menu and asks what you would like to order. She then takes the order and, after some time, your meal arrives.

As it was an AWS Summit, we decided to sprinkle a bit of technical fairy dust on the ordering process. Instead of having the waitress take the breakfast orders, we contemplated using Amazon Alexa instead 😉

I decided to give the Alexa skill development a go. However, not having any prior Alexa skill development experience, I anticipated an uphill battle, having to first learn the product and then develop for it. To my amazement, the learning curve wasn’t too steep, and over a weekend, spending just 12 hours in total, I had a working proof-of-concept breakfast ordering skill ready!

Here is a link to the proof-of-concept skill: https://youtu.be/Z5Prr31ya10

I then spent a week polishing the Alexa skill, giving it more “personality” and adding a more “human” experience.

All the work paid off when I got told that my Alexa skill would be used at the Telstra breakfast session! I was over the moon!

For the final product, to make things even more interesting, I created a business intelligence chart using Amazon QuickSight, showing the popularity of each of the food and drink items on the menu. The popularity was based on the orders that were being received.

[Image: Amazon QuickSight visuals showing the popularity of the food and drink items]

Using a laptop, I displayed the chart near the Amazon Echo Dot. This was to help people choose what food or drink they wanted to order (a neat marketing trick 😉). If you would like to know more about Amazon QuickSight, you can read about it at Amazon QuickSight – An elegant and easy to use business analytics tool

Just as a teaser, you can watch one of the ordering scenarios for the finished breakfast ordering skill at https://youtu.be/T5PU9Q8g8ys

In this blog, I will introduce the architecture behind Amazon Alexa and prepare you for creating an Amazon Alexa Skill. In the next blog, we will get our hands dirty with creating the breakfast ordering Alexa skill.

How does Amazon Alexa actually work?

I have heard a lot of people use the name “Alexa” interchangeably with the Amazon Echo devices. As good as that is for Amazon’s marketing team, unfortunately, I have to set the record straight. Amazon Echo refers to the physical devices that Amazon sells, which interface with the Alexa Cloud. You can see the whole range at https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b?ie=UTF8&node=9818047011. These devices don’t have any smarts in them. They sit in the background listening for the “wake” word, and then they start streaming the audio to the Alexa Cloud.

The Alexa Cloud is where all the smarts are located. Using speech recognition, machine learning and natural language processing, it converts the audio to text. It then identifies the skill that the user requested, the intent and any slot values it finds (these will be explained further in the next blog). The intent and slot values (if any) are passed to the identified skill. The skill processes the input using some form of compute (AWS Lambda in my case) and then passes the output back to the Alexa Cloud. The Alexa Cloud converts the skill output to Speech Synthesis Markup Language (SSML) and sends it to the Amazon Echo device. The device then converts the SSML to audio and plays it to the user.
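To make this concrete, below is a simplified sketch of the kind of JSON request that the Alexa Cloud sends to a skill’s compute endpoint. The skill, intent and slot names here are hypothetical, and many fields have been omitted for brevity.

{
   "version": "1.0",
   "request": {
      "type": "IntentRequest",
      "intent": {
         "name": "OrderBreakfastIntent",
         "slots": {
            "foodName": {
               "name": "foodName",
               "value": "pancakes"
            }
         }
      }
   }
}

The skill then replies with a JSON response along these lines, with the spoken output expressed as SSML:

{
   "version": "1.0",
   "response": {
      "outputSpeech": {
         "type": "SSML",
         "ssml": "<speak>Your pancakes are on their way!</speak>"
      }
   }
}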

Below is an overview of the process.

[Image: Alexa Skills Kit request/response flow diagram]

The diagram is from https://developer.amazon.com/blogs/alexa/post/1c9f0651-6f67-415d-baa2-542ebc0a84cc/build-engaging-skills-what-s-inside-the-alexa-json-request

Getting things ready

Getting an Alexa-enabled device

The first thing to get is an Alexa-enabled device. Amazon has released quite a few varieties of Alexa-enabled devices. You can check out the whole family here.

If you are keen to try a side project, you can build your own Alexa device using a Raspberry Pi. A good guide can be found at https://www.lifehacker.com.au/2016/10/how-to-build-your-own-amazon-echo-with-a-raspberry-pi/

You can also try out EchoSim (Amazon Echo Simulator). This is a browser-based interface to Amazon Alexa. Please ensure you read the limits of EchoSim on their website. For instance, it cannot stream music.

For developing the breakfast ordering skill, I decided to purchase an Amazon Echo Dot. It’s a nice compact device, which doesn’t cost much and can run off any USB power source. For the Telstra breakfast session, I actually ran it off my portable battery pack 😉

Create an Amazon Account

Now that you have got yourself an Alexa enabled device, you will need an Amazon account to register it with. You can use one that you already have or create a new one. If you don’t have an Amazon account, you can either create one beforehand by going to https://www.amazon.com or you can create it straight from the Alexa app (the Alexa app is used to register the Amazon Echo device).

Set up your Amazon Echo Device

Use the Alexa app to set up your Amazon Echo device. When you log in to the app, you will be asked for your Amazon account credentials. As stated above, if you don’t have an Amazon account, you can create one from within the app.

Create an Alexa Developer Account

To create skills for Alexa, you need a developer account. If you don’t have one already, you can create one by going to https://developer.amazon.com/alexa. There are no costs associated with creating an Alexa developer account.

Just make sure that the username you choose for your Alexa developer account matches the username of the Amazon account to which your Amazon Echo is registered. This will enable you to test your Alexa skills on your Amazon Echo device without having to publish them on the Alexa Skills Store (the skills will show under Your Skills in the Alexa app).

Create an AWS Free Tier Account

In order to process the requests sent to the breakfast ordering Alexa skill, we will make use of AWS Lambda. AWS Lambda provides a cost-effective way to run code because you are charged only for the time that your code runs; there is no charge for idle time.

If you already have an AWS account, you can use that; otherwise, you can sign up for an AWS Free Tier account by going to https://aws.amazon.com. AWS provides a lot of services for free for the first 12 months under the Free Tier, and some services have a free allowance that continues even beyond the 12 months (AWS Lambda is one such service). For a full list of Free Tier services, visit https://aws.amazon.com/free/

High Level Architecture for the Breakfast Ordering Skill

Below is the architectural overview for the Breakfast Ordering Skill that I built. I will introduce you to the various components over the next few blogs.

[Image: Breakfast Ordering System high-level architecture]

In the next blog, I will take you through the Alexa Developer Console, where we will use the Alexa Skills Kit (ASK) to start creating our breakfast ordering skill. We will define the invocation name, intents and slot names for our Alexa skill. Not familiar with these terms? Don’t worry, I will explain them in the next blog. I hope to see you there.

See you soon.

Deploying an Active Directory Forest using AWS CloudFormation

Introduction

Wow, it is amazing how time flies. Almost two years ago, I wrote a set of blogs that showed how one can use Azure Resource Manager (ARM) templates and Desired State Configuration (DSC) scripts to deploy an Active Directory Forest automatically.

For those that would like to take a trip down memory lane, here is the link to the blog.

Recently, I have been playing with AWS CloudFormation and I am simply in awe of its power. For those that are not familiar with it, AWS CloudFormation is a tool, similar to Azure Resource Manager, that allows you to “code” your computing infrastructure in Amazon Web Services. Long gone are the days when you would have to sit there pressing each button and choosing each option to deploy your environment. Cloud computing provides you with a way to interface with the fabric, so that you can script the build of your environment. The benefits of this are enormous. Firstly, it allows you to standardise all your builds. Secondly, it gives you a live as-built document (the code is the as-built document). Thirdly, the code is re-usable. Most important of all, since the deployment is now scripted, you can automate it.

In this blog I will show you how to create an AWS CloudFormation template to deploy an AWS Elastic Compute Cloud (EC2) Windows Server instance. The template will also include steps to promote the EC2 instance to a Domain Controller in a new Active Directory Forest.

Guess what the best part is? Once the template has been created, all you will have to do is load it into AWS CloudFormation, provide a few values and sit back and relax. AWS CloudFormation will do everything for you from there on!

Sound interesting? Let’s begin.

Creating the CloudFormation Template

A CloudFormation template starts with a definition of the parameters that will be used. The person running the template (let’s refer to them as the operator) will be asked to provide values for these parameters.

When defining a parameter, you provide the following:

  • a name for the parameter
  • its type
  • a brief description for the parameter so that the operator knows what it will be used for
  • any constraints you want to put on the parameter, for instance
    • a maximum length (for strings)
    • a list of allowed values (in this case, the operator is presented with a drop-down list to choose from)
  • a default value for the parameter

For our template, we will use the following parameters.
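As a flavour of what these definitions look like, here is a sketch of two of them. The parameter names, constraints and defaults below are illustrative, not the exact ones from my template; the complete template is linked at the end of this blog.

{
   "Parameters": {
      "ServerName": {
         "Type": "String",
         "Description": "Hostname for the EC2 instance (NetBIOS names allow a maximum of 15 characters)",
         "MaxLength": "15"
      },
      "WindowsOSVersion": {
         "Type": "String",
         "Description": "Windows Server version to deploy",
         "AllowedValues": [ "WindowsServer2012R2", "WindowsServer2016" ],
         "Default": "WindowsServer2016"
      }
   }
}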

Next, we will define some mappings. Mappings allow us to define the values of variables based on what value was provided for a parameter.

When creating EC2 instances, we need to provide a value for the Amazon Machine Image (AMI) to be used. In our case, we will use the OS version to decide which AMI to use.

To find the subnet into which the EC2 instance will be deployed, we will use the Environment and AvailabilityZone parameters.

The code below defines the mappings we will use
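Here is an illustrative sketch; the AMI IDs and subnet IDs are placeholders (AMI IDs are region-specific), and the map and key names may differ from the real template linked at the end of this blog.

{
   "Mappings": {
      "AMIMap": {
         "WindowsServer2012R2": { "AMI": "ami-11111111" },
         "WindowsServer2016": { "AMI": "ami-22222222" }
      },
      "SubnetMap": {
         "Dev": {
            "ap-southeast-2a": "subnet-aaaaaaaa",
            "ap-southeast-2b": "subnet-bbbbbbbb"
         },
         "Prod": {
            "ap-southeast-2a": "subnet-cccccccc",
            "ap-southeast-2b": "subnet-dddddddd"
         }
      }
   }
}

Elsewhere in the template, values are then looked up with Fn::FindInMap, for example [ "SubnetMap", { "Ref": "Environment" }, { "Ref": "AvailabilityZone" } ].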

The next section in the CloudFormation template is Resources. This defines all resources that will be created.

If you have any experience deploying Active Directory Forests, you will know that it is extremely simple to do using PowerShell scripts. Guess what, we will be using PowerShell scripts as well 😉 After the EC2 instance has been created, we need to provide the PowerShell scripts to it so that it can run them. We will use AWS Simple Storage Service (S3) buckets to store our PowerShell scripts.

To ensure our PowerShell scripts are stored securely, we will allow access to them only via a specific role and policy.

The code below will create an AWS Identity and Access Management (IAM) role and policy to access the S3 Bucket where the PowerShell scripts are stored.
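A sketch of those resources follows. The bucket name my-scripts-bucket and the resource names are placeholders; note that an instance profile is also needed so the role can be attached to the EC2 instance.

"InstanceRole": {
   "Type": "AWS::IAM::Role",
   "Properties": {
      "AssumeRolePolicyDocument": {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Effect": "Allow",
               "Principal": { "Service": "ec2.amazonaws.com" },
               "Action": "sts:AssumeRole"
            }
         ]
      },
      "Policies": [
         {
            "PolicyName": "s3-script-access",
            "PolicyDocument": {
               "Version": "2012-10-17",
               "Statement": [
                  {
                     "Effect": "Allow",
                     "Action": "s3:GetObject",
                     "Resource": "arn:aws:s3:::my-scripts-bucket/*"
                  }
               ]
            }
         }
      ]
   }
},
"InstanceProfile": {
   "Type": "AWS::IAM::InstanceProfile",
   "Properties": {
      "Roles": [ { "Ref": "InstanceRole" } ]
   }
}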

We will use cfn-init to do all the heavy lifting for us once the EC2 instance has been created. cfn-init is a helper utility that comes pre-installed on the Windows AMIs provided by AWS, and we can ask it to perform tasks for us.

To trigger cfn-init, we will use the UserData feature of EC2 instance provisioning. When started, cfn-init reads the EC2 instance’s CloudFormation metadata to find the credentials it should use and the tasks it needs to perform.

Below is the metadata that will be used. For simplicity, I have hardcoded the URL to the files in the S3 bucket.
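The sketch below shows the shape of that metadata. The bucket name my-scripts-bucket, the configSet name and the command names are illustrative, and passing parameters to the scripts is omitted for brevity; the real metadata is in the template linked at the end of this blog.

"Metadata": {
   "AWS::CloudFormation::Authentication": {
      "S3AccessCreds": {
         "type": "S3",
         "roleName": { "Ref": "InstanceRole" },
         "buckets": [ "my-scripts-bucket" ]
      }
   },
   "AWS::CloudFormation::Init": {
      "configSets": {
         "config": [ "get-files", "configure-instance" ]
      },
      "get-files": {
         "files": {
            "c:\\s3-downloads\\scripts\\Add-WindowsComponents.ps1": {
               "source": "https://s3.amazonaws.com/my-scripts-bucket/Add-WindowsComponents.ps1"
            },
            "c:\\s3-downloads\\scripts\\Configure-ADForest.ps1": {
               "source": "https://s3.amazonaws.com/my-scripts-bucket/Configure-ADForest.ps1"
            }
         }
      },
      "configure-instance": {
         "commands": {
            "1-set-executionpolicy": {
               "command": "powershell.exe -Command \"Set-ExecutionPolicy Unrestricted -Force\""
            },
            "2-rename-computer": {
               "command": { "Fn::Join": [ "", [ "powershell.exe -Command \"Rename-Computer -NewName ", { "Ref": "ServerName" }, " -Force\"" ] ] }
            },
            "3-install-components": {
               "command": "powershell.exe -File c:\\s3-downloads\\scripts\\Add-WindowsComponents.ps1"
            },
            "4-create-forest": {
               "command": "powershell.exe -File c:\\s3-downloads\\scripts\\Configure-ADForest.ps1"
            }
         }
      }
   }
}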

As you can see, I have first defined the role that cfn-init will use to access the S3 bucket. Next, the following tasks will be carried out, in the order defined in the configuration set:

  • get-files
    • It will download the files from S3 and place them in the local directory c:\s3-downloads\scripts.
  • configure-instance (the commands in this section run in alphabetical order, which is why I have prefixed them with numbers, to enforce the order I want)
    • It will change the PowerShell execution policy to unrestricted (please note that this is just for demonstration purposes; the execution policy should not be left this relaxed).
    • Next, the name of the server will be changed to what was provided in the Parameters section.
    • The following Windows components will be installed (as defined in the Add-WindowsComponents.ps1 script file):
      • RSAT-AD-PowerShell
      • AD-Domain-Services
      • DNS
      • GPMC
    • Finally, the Active Directory Forest will be created, using the Configure-ADForest.ps1 script and the values provided in the Parameters section.

In the last part of the CloudFormation template, we will provide the UserData information that will trigger cfn-init to run and do all the configuration. We will also tag the EC2 instance, based on values from the Parameters section.

For simplicity, I have hardcoded the security group that will be attached to the EC2 instance (this is defined as GroupSet under NetworkInterfaces). You can easily create an additional parameter for this, if you want.

Finally, our template will output the instance’s hostname, the environment it has been created in, and its private IP address. This provides an easy way to identify the EC2 instance once it has been created.

Below is the last part of the template.
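Here is a hedged sketch of it. The security group ID sg-11111111 is a hardcoded placeholder (as mentioned above), the instance type is illustrative, and the resource and parameter names follow the earlier sketches rather than the exact names in my template.

"DomainController": {
   "Type": "AWS::EC2::Instance",
   "Properties": {
      "ImageId": { "Fn::FindInMap": [ "AMIMap", { "Ref": "WindowsOSVersion" }, "AMI" ] },
      "InstanceType": "t2.large",
      "IamInstanceProfile": { "Ref": "InstanceProfile" },
      "NetworkInterfaces": [
         {
            "DeviceIndex": "0",
            "GroupSet": [ "sg-11111111" ],
            "SubnetId": { "Fn::FindInMap": [ "SubnetMap", { "Ref": "Environment" }, { "Ref": "AvailabilityZone" } ] }
         }
      ],
      "Tags": [
         { "Key": "Name", "Value": { "Ref": "ServerName" } },
         { "Key": "Environment", "Value": { "Ref": "Environment" } }
      ],
      "UserData": {
         "Fn::Base64": {
            "Fn::Join": [ "", [
               "<script>cfn-init.exe -v -s ", { "Ref": "AWS::StackId" },
               " -r DomainController --configsets config --region ", { "Ref": "AWS::Region" },
               "</script>"
            ] ]
         }
      }
   }
}

And the outputs:

"Outputs": {
   "Hostname": { "Value": { "Ref": "ServerName" } },
   "Environment": { "Value": { "Ref": "Environment" } },
   "PrivateIP": { "Value": { "Fn::GetAtt": [ "DomainController", "PrivateIp" ] } }
}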

Now all you have to do is login to AWS CloudFormation, load the template we have created, provide the parameter values and sit back and relax.

AWS CloudFormation will take it from here and do everything for you 😉

How easy was that? Magic 🙂

The complete CloudFormation template is available at https://gist.github.com/nivleshc/867b1a2ca119c7d22cf215b5a9a5de02

The two PowerShell scripts that are used in the CloudFormation template can be downloaded using the links below:

Add-WindowsComponents.ps1

Configure-ADForest.ps1

For anyone deploying an Active Directory Forest in AWS, I hope the above comes in handy.

Enjoy 😉

Amazon QuickSight – An elegant and easy to use business analytics tool

Introduction

Recently, I had a requirement for a tool to visualise some data I had collected. My requirements were very simple. I didn’t want something that would cost me a lot, and at the same time I wanted the reports to be elegant and informative. Most of all, I didn’t want to have to go through pages and pages of documentation to learn how to use it.

As my data was within Amazon Web Services (AWS), I thought I would check if AWS had any such offering. Guess what, there was indeed a tool that did just what I wanted, and after using it, I was amazed at how simple and elegant it is.

In this blog, I will show how you can easily get started with Amazon QuickSight. I will take you through the steps to import your data into Amazon QuickSight and then create some informative visualisations.

Some background on Amazon QuickSight

Pricing

Amazon QuickSight is very inexpensive; in fact, if you don’t have much data, you won’t have to pay anything!

For the standard edition, Amazon QuickSight provides the first user with 1 GB of SPICE per month for free. SPICE (Super-fast, Parallel, In-memory Calculation Engine) is the calculation engine that Amazon QuickSight uses. It combines columnar storage, in-memory technologies enabled through the latest hardware innovations, machine code generation and data compression to allow users to run interactive queries on large datasets and get rapid responses.

Any additional SPICE is priced at USD $0.25 per GB per month. For the latest pricing, please refer to https://aws.amazon.com/quicksight/#Pricing

Data Sources

Currently, Amazon QuickSight supports the following data sources:

  • Relational Data Sources
    • Amazon Athena
    • Amazon Aurora
    • Amazon Redshift
    • Amazon Redshift Spectrum
    • Amazon S3
    • Amazon S3 Analytics
    • Apache Spark 2.0 or later
    • Microsoft SQL Server 2012 or later
    • MySQL 5.1 or later
    • PostgreSQL 9.3.1 or later
    • Presto 0.167 or later
    • Snowflake
    • Teradata 14.0 or later
  • File Data Sources
    • CSV/TSV – comma-separated and tab-separated value text files
    • ELF/CLF – Extended and common log format files
    • JSON – Flat or semi-structured data files
    • XLSX – Microsoft Excel files

Unfortunately, Amazon DynamoDB is currently not supported as a native data source. Since my data was in Amazon DynamoDB, I had to write some custom Lambda functions to export it to a csv file, so that it could be imported into Amazon QuickSight.

Ok, time for that walk-through I promised earlier. For this blog, I will be using an S3 bucket as my data source. It will contain the CSV files that I will use for analysis in Amazon QuickSight.

Step 1 – Create an S3 bucket

If you haven’t already done so, create an S3 bucket that will contain the csv files. The S3 bucket does not have to be publicly accessible. Once created, upload the csv files into the S3 bucket.

In my case, the csv file is called orders.csv and its location is https://s3.amazonaws.com/sample/orders.csv (to get the URL to your S3 file, log in to the S3 console and navigate to the S3 bucket that contains the file. Click the S3 bucket to open it, then click the file name to open its properties. Under Overview you will see Link. This is the URL to the file).

Step 2 – Create an Amazon QuickSight Account

Before you start using Amazon QuickSight, you must create an account. Unfortunately, I couldn’t find a way to create an Amazon QuickSight account without creating an Amazon AWS account. If you don’t have an existing Amazon AWS account, you can create an AWS Free Tier account. Once you have got an AWS account, go ahead and create an Amazon QuickSight account at https://aws.amazon.com/quicksight/.

While creating your Amazon QuickSight account, you will be asked if you would like Amazon QuickSight to auto-discover your Amazon S3 buckets. Enable this and then click Choose S3 buckets. Choose the S3 bucket that you created in Step 1 above. This will give Amazon QuickSight read-only access to the S3 bucket, so that it can read the data for analysis.

Step 3 – Create a manifest file

A manifest file is a JSON file that provides the location and format of the data files to Amazon QuickSight. This is required when creating a data set for S3 data sources. Please refer to https://docs.aws.amazon.com/quicksight/latest/user/supported-manifest-file-format.html if you would like more information about manifest files.

Below is my manifest file, which I have affectionately named ordersmanifest.json.

{
   "fileLocations": [
      {
         "URIs": [
            "https://s3.amazonaws.com/sample/orders.csv"
         ]
      }
   ],
   "globalUploadSettings": {
      "format": "CSV",
      "delimiter": ",",
      "textqualifier": "'",
      "containsHeader": "true"
   }
}

Once created, upload the manifest file into the same S3 bucket where the csv file is stored.

Step 4 – Create a data set

  • Log in to your Amazon QuickSight account. From the top right, click on Manage data
  • In the next screen, click on New data set
  • In the next screen, for Create a Data Set FROM NEW DATA SOURCES, click on S3
  • In the next screen
    • provide a name for the data source
    • for Upload a manifest file, ensure URL is selected and enter the URL to the manifest file (you can get the URL by logging into the S3 console and then clicking on the manifest file to reveal its properties. Under the Overview tab, you will see Link. This is the URL to the manifest file).
    • Click Connect
    • Amazon QuickSight will now read the manifest file and then import the csv file into SPICE.
    • Click on Edit/Preview data.
    • In the next screen, you will see the contents of the data file that was imported, along with the field names on the left. If you want to exclude any columns from the analysis, simply untick them (I unticked orderTime (S) since I didn’t need it).
    • By default, the data is called Group 1. To customise the name, replace Group 1 with text of your choice (I have renamed my data to Orders Data).
    • Click Save & visualize from the top menu

Step 5 – Create Visualisations

Now that you have imported the data into SPICE, you can start analysing it and creating visualisations.

After step 4, you should be in the Analysis section.

  • Depending on which visualisation you want, you can select the respective type under Visual types at the bottom left hand side of the screen. For my visualisations, I chose Pie Chart (side note – you will notice that orderTime (S) isn’t listed under the Fields list. This is because we unticked it in the previous screen).
  • I want to create two Pie Charts, one showing the most popular foodName and another showing the most popular drinkName. For the first Pie Chart, drag foodName (S) from the Fields list to the Value – Add a measure here box at the top of the screen. Then drag foodName (S) from the Fields list to the Group/Color – Add a dimension here box at the top of the screen.
  • You can customise the visualisation title Count of Foodname (S) by Foodname (S) by clicking it and then changing the text (I changed the title to Popularity of Food Types).
  • If you look closely, the legend on the right hand side doesn’t serve much purpose since the pie slices are already labelled quite well. You can get rid of the legend and gain more space for your visual. To do this, click on the down arrow above FoodName (S) on the right and then select Hide legend.
  • Next, let’s create a Pie Chart visualisation for drinkName. From the top menu, click on Add and then Add visual.
  • You will now have another canvas at the bottom of the first Pie Chart. Click this new canvas area to select it (a blue border will appear to show that it is selected). From Visual types at the bottom left hand side, click on the Pie Chart visual. Then from the top, click on Field wells to expose the Value and Group/Color boxes for the second canvas.
  • From the Fields list on the left, drag drinkName (S) to the Value – Add a measure here box at the top of the screen. Then drag drinkName (S) from the Fields list to the Group/Color – Add a dimension here box at the top of the screen.
  • We are almost done. I actually want the two Pie Charts to sit side by side, instead of one on top of the other. To do this, I will show you a neat trick. In each of the visuals, at the bottom right border, you will see two diagonal lines. If you move your mouse pointer over them, they change to a resizing cursor. Use this to resize the visual’s canvas area. Also, in the middle of the top border of the visual, you will see two rows of gray dots. Click your mouse pointer on this and drag to the location you want to move the visual to.
  • I hid the legend for the second visual, customised the title, resized both visuals and moved them side by side. Voila! Below is what I get. Not bad, aye!

Step 6 – Create a dashboard

Now that the visuals have been created, they can be shared with others. This can be done by creating a dashboard. A dashboard is a read-only snapshot of the analysis. When you share the dashboard with others, they can view and filter the dashboard data; however, any filters applied to a dashboard visual exist only while the user is viewing the dashboard, and aren’t saved once it is closed.

One thing to note about sharing dashboards – you can only share dashboards with users who have an Amazon QuickSight account.

Creating a dashboard is very easy.

  • In the Analysis screen, on the top right corner, click on Share and then select Create dashboard.
  • You can either replace an existing dashboard or create a new one. In our case, since we are creating a new dashboard, select Create a new dashboard as and enter a name for the dashboard. Once finished, click Create dashboard.
  • You will then be asked to enter the username or email address of those you want to share the dashboard with. Enter this and click on Share.
  • That’s it, your dashboard is now created. To access it, go to the Amazon QuickSight home screen (click on the Amazon QuickSight icon at the top left hand side of the screen) and then click on All dashboards. Those that you have shared the dashboard with will also be able to see it once they log in to their Amazon QuickSight account.

Step 7 – Refreshing the Data Set

If your data set continually changes, your visualisations and dashboards will not show the updated information automatically. To bring them up to date, you need to refresh the data set. Doing this imports the new data into SPICE, which then automatically updates the analyses, visualisations and dashboards.

Note: you will have to manually reload the webpage to see the updated visualisations and dashboards.

There are two ways of refreshing data sets: manually, or on a schedule. A scheduled data refresh allows the data to be automatically refreshed at a certain time daily, weekly or monthly. A maximum of five scheduled refreshes can be configured.

The steps below show how you can manually refresh the data or create schedules to refresh the data automatically:

  • From the Amazon QuickSight main screen, click on Manage data at the top left of the screen.
  • In the next screen, you will see all your currently configured data sets. Click the Orders Data data set (this is the one we created previously).
  • In the next screen, you will see Refresh Now and Schedule refresh.
  • Clicking on Refresh Now will manually refresh the data. Clicking on Schedule refresh will bring up the screen where you can configure a schedule for refreshing the data automatically.

That’s it folks! Wasn’t that simple? If you already have an Amazon AWS account, I would strongly recommend giving Amazon QuickSight a try for all your analytics needs. Even if you don’t have an Amazon AWS account, I would still suggest getting an AWS free tier account to try it out.

Enjoy 😉