Publish Your Amazon Sumerian Scenes Privately using AWS Amplify

Background

In my previous blog, I listed the steps for creating a virtual life insurance agent using Amazon Sumerian. The follow-on step is to publish the Amazon Sumerian scene, to make it accessible to your audience.

There are two options for publishing – either publish it publicly or privately. The difference? When you publish publicly, anyone with the link to the Amazon Sumerian scene can access it. If your Amazon Sumerian scene uses sensitive intellectual property artefacts, then you shouldn’t use this method. Also, be mindful that Amazon Sumerian costs increase with the number of views. If you intend on keeping your costs low, and there is no justifiable reason to make your Amazon Sumerian scene public, then it’s best not to publish it publicly (unless you are building a virtual life insurance agent, in which case you definitely should).

When you publish your Amazon Sumerian scene privately, you can control who gets access to it. This solution leverages Amazon Cognito to add an authentication gate, which you control. Without a valid user account, people will not be able to access the published Amazon Sumerian scene.

In this blog, I will demonstrate how to publish an existing Amazon Sumerian scene privately. I will be using the Amazon Sumerian scene that I created in my previous blog.

Prerequisites

Before continuing, ensure that you have the following software installed on your computer. If not, please follow the respective links to get them installed.

Implementation

With the prerequisites installed, let’s continue on to the implementation steps.

  1. Login to your AWS Management Console and open the Amazon Sumerian Console. Change to the AWS Region where the Amazon Sumerian scene you want to publish is located.
  2. From the Amazon Sumerian Console, click on the Amazon Sumerian scene that you want to publish, to open its configuration screen.
  3. From the top-right corner, click on Publish and then choose Host Privately.

  4. In the next screen, click Publish. Warning – this part takes a few seconds to complete. Once completed, click Download JSON configuration and save the file (this file will be used later).
  5. Next, we will create a react app. This app will be used to publish the Amazon Sumerian scene. We will then add AWS Amplify libraries to the react app, to convert it into an AWS Amplify app.
    Open a command prompt screen (or terminal session on your Mac) and type the following commands to create a react app in a folder called lifeinsurance-sumerian-app
    npx create-react-app lifeinsurance-sumerian-app
    cd lifeinsurance-sumerian-app
    npm install aws-amplify aws-amplify-react --save
  6. Next, we must initialise the AWS Amplify app. To do this, run the following commands in the same command prompt screen as above (you can use the responses I have provided or tailor them to your specific requirements).
    amplify init
    Enter a name for the project LifeInsuranceHostApp
    Enter a name for the environment dev
    Choose your default editor: Visual Studio Code
    Choose the type of app that you're building javascript
    What javascript framework are you using react
    Source Directory Path:  src
    Distribution Directory Path: build
    Build Command:  npm run-script build
    Start Command: npm run-script start
    Do you want to use an AWS profile? Yes
    Please choose the profile you want to use aws-account-profile
  7. Next, we must add the xr component to our AWS Amplify app. This is required to host the Amazon Sumerian scene. In the previous command prompt screen, type the following:
    amplify add xr

    As we already downloaded the Amazon Sumerian scene’s JSON configuration file, that specific step can be skipped.

    Provide a name for the scene: LifeInsuranceHost
    Enter the path to the downloaded JSON configuration file for LifeInsuranceHost: <provide the full path to the JSON configuration file that was downloaded>

    AWS Amplify CLI will now start adding the XR component to your project. Wait for it to complete successfully.

    At the next prompt, answer No to allow only authenticated users access to the Sumerian project.

    Would you like to allow unauthenticated users access to the Sumerian project for this scene? No

  8. To create the resources in AWS, run the following command:
    amplify push

    Before deploying, the AWS Amplify CLI will display the changes it will perform and ask for confirmation before continuing (the components Auth and Xr should be listed as those that will be created). At the prompt Are you sure you want to continue? press Y and then press enter.

    The provisioning of the AWS resources will now start. Wait for this to complete.

    Sidenote:
        When I ran amplify push, I got the following error:
    
        Failed with error
        Resource Name: UpdateRolesWithIDPFunction (AWS::Lambda::Function)
        Event Type: create
        Reason: The runtime parameter of nodejs8.10 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs12.x) while creating or updating functions. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 3dadbcbdhdhd)
    

    The fix for this was quite straightforward: I just had to update the AWS Amplify CLI and re-run the command. If you face the same issue, this article explains how to upgrade/install the AWS Amplify CLI: https://docs.amplify.aws/cli/start/install

  9. Next, we must update the Auth module so that it creates an Amazon Cognito User Pool. This will be used to authenticate users before they can access the AWS Amplify app. Run the following commands:
    amplify update auth
    What do you want to do? Apply default configuration without Social Provider (Federation)
    
  10. Push the changes to AWS with the following command:
    amplify push

When asked to confirm that you want to continue, press Y and then press enter. Wait for the command to complete before continuing.

    11. Now, we must modify the default react application, so that it displays the Amazon Sumerian scene. From the src folder (inside your react app directory), open App.js and replace its contents with the below:

import React, { Component } from 'react';
import './App.css';
import { withAuthenticator, SumerianScene } from 'aws-amplify-react';
import Amplify from 'aws-amplify';
import awsmobile from './aws-exports';
import '@aws-amplify/ui/dist/style.css';

Amplify.configure(awsmobile);

class App extends Component {
  render() {
    return (
      <div style={{ height: '100vh' }}>
        <SumerianScene sceneName='LifeInsuranceHost' />
      </div>
    );
  }
}

export default withAuthenticator(App, { includeGreetings: true });

where sceneName is the name that you had provided when you ran the command amplify add xr.

The withAuthenticator is a React Higher-Order Component (HOC) which forces the user to authenticate with Amazon Cognito before they can access the Amazon Sumerian scene.

12. Let’s quickly test the react app. From the command prompt screen, run the following command:

    npm start

The above will start the react app locally on TCP port 3000 and then open http://localhost:3000 in your Internet browser. You should see a screen similar to the one below.

I want to take this opportunity to call out something important. At this stage, even though you don’t have an account to log in to the app with, you can still click Create account to create a user account for yourself, which you can then use to log in! In my personal opinion, this “feature” must be disabled, otherwise you won’t be able to control who gets access to your Amazon Sumerian scene. Only you should be allowed to create (and manage) user accounts.

Let’s fix this quickly.

Login to your AWS Management Console and open the AWS Amplify console. Change to the AWS Region where the AWS Amplify app was deployed. Find the AWS Amplify app that you just deployed (this will be listed under All apps). Click on it to open its configuration.

In the right-hand side screen, click Backend environments.

Under Categories added, click Authentication.

In the next screen, under the Authentication tab, you should see Users and Federated Identities. Make a note of the value under Name in each of these sections.

Click View in Cognito beside Users. This will open the Amazon Cognito User Pools Portal. If you get a Region not supported message, click on the appropriate Switch to <region> that is listed underneath the warning.

From the list of User Pools displayed, click on the name you saw earlier in the AWS Amplify portal.

In the next screen, in the left-hand side menu, ensure General settings is selected. In the right-hand side panel, the fourth section will look like below. Notice that User sign ups allowed? is set to Users can sign themselves up. This must be changed.

Click on the pencil at the far right in this panel to open the settings.

In the next screen, under Do you want to allow users to sign themselves up?, select Only allow administrators to create users and click Save changes.

You will be returned to the User Pool’s configuration screen.

Before continuing, let’s create a user that will be used to login to the AWS Amplify hosted website. From the left-hand side menu, click Users and groups (under General settings). Then, in the right-hand side pane, click Create user and provide the required information. Once completed, click Create user at the bottom.
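
If you prefer scripting these two changes, the AWS SDK for JavaScript exposes the same operations (updateUserPool and adminCreateUser on CognitoIdentityServiceProvider). Below is a minimal sketch that only builds the request parameters; the pool ID, username and password values are placeholders, and the commented lines show where the actual calls would go.

```javascript
// Sketch: request parameters for locking down sign-up and creating a
// user in an Amazon Cognito User Pool. All IDs below are placeholders.

// Maps to "Only allow administrators to create users" in the console.
function buildAdminOnlySignUpParams(userPoolId) {
  return {
    UserPoolId: userPoolId,
    AdminCreateUserConfig: { AllowAdminCreateUserOnly: true },
  };
}

// Creates a user with a temporary password that must be changed on
// first sign-in.
function buildCreateUserParams(userPoolId, username, temporaryPassword) {
  return {
    UserPoolId: userPoolId,
    Username: username,
    TemporaryPassword: temporaryPassword,
  };
}

// With the aws-sdk package installed and credentials configured:
// const AWS = require('aws-sdk');
// const idp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });
// idp.updateUserPool(buildAdminOnlySignUpParams('us-east-1_EXAMPLE')).promise();
// idp.adminCreateUser(buildCreateUserParams('us-east-1_EXAMPLE', 'jane', 'Temp#Pass1')).promise();
```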

13. Let’s continue. From the top left-hand side, click on Federated Identities.

Click on the Federated Identity that was displayed in the AWS Amplify screen previously. Then from the top right-hand side screen, click Edit identity pool.

In the next screen, in the right-hand side pane, note down the value for Authenticated role. This is the role that a user will assume once they have successfully authenticated with Amazon Cognito. We need to add additional Identity and Access Management policies to it.

14. Open the Identity and Access Management (IAM) Portal. Locate the IAM role that you just saw in Federated Identities settings. Attach the following additional managed polices to it.
AmazonPollyReadOnlyAccess
AmazonLexRunBotsOnly
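
As an alternative to clicking through the IAM console, the attachment can be scripted. This sketch builds one attachRolePolicy request per managed policy; the role name passed in is a placeholder for the authenticated role from your Federated Identity settings.

```javascript
// AWS-managed policies live under the fixed arn:aws:iam::aws:policy/ prefix.
const REQUIRED_POLICIES = ['AmazonPollyReadOnlyAccess', 'AmazonLexRunBotsOnly'];

// Build one attachRolePolicy request per required managed policy.
function buildAttachPolicyRequests(roleName) {
  return REQUIRED_POLICIES.map((policyName) => ({
    RoleName: roleName,
    PolicyArn: 'arn:aws:iam::aws:policy/' + policyName,
  }));
}

// With the aws-sdk package installed and credentials configured:
// const AWS = require('aws-sdk');
// const iam = new AWS.IAM();
// for (const params of buildAttachPolicyRequests('my-authenticated-role')) {
//   iam.attachRolePolicy(params).promise();
// }
```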
15. Next, we will enable this react app to be hosted by AWS Amplify. To do this, run the following commands:
amplify add hosting

Use the following responses:

Select the plugin module to execute: Hosting with Amplify Console (Managed hosting with custom domain , Continuous deployment)
Choose a type: Manual deployment
16. Next, the changes must be pushed to AWS. Run the following command to get AWS Amplify to create the resources in AWS:
amplify publish

Before AWS Amplify CLI starts the deployment, it will display the AWS resources that will be created or updated and will ask for confirmation before continuing (notice the category Hosting has a status of Create under Operation). Press Y and then press enter.

After the deployment completes, you will be provided with a URL for your AWS Amplify Console hosted website. Record this URL.

17. Open your Internet browser (Chrome is highly recommended) and enter the URL that was provided in the previous step. When prompted, use the credentials for the Amazon Cognito User Pool user that was created. Once authenticated, you should be able to access your Amazon Sumerian Scene (the Amazon Sumerian Virtual Life Insurance Agent in this case).

That’s it! Wasn’t that easy? To give access to users, just create their user accounts in the Amazon Cognito User Pool that was created, and provide them with the website URL. To remove access, you can disable and then delete the respective Amazon Cognito user account.
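
Removing access can also be scripted with the same CognitoIdentityServiceProvider client (adminDisableUser, then adminDeleteUser). The sketch below just prepares the shared parameters and the order of operations; the pool ID and username are placeholders.

```javascript
// Shared parameters for disabling and then deleting a user pool account.
function buildRemoveAccessPlan(userPoolId, username) {
  const params = { UserPoolId: userPoolId, Username: username };
  // Order matters: disable first so access stops immediately,
  // then delete the account permanently.
  return [
    { operation: 'adminDisableUser', params },
    { operation: 'adminDeleteUser', params },
  ];
}

// const AWS = require('aws-sdk');
// const idp = new AWS.CognitoIdentityServiceProvider();
// for (const step of buildRemoveAccessPlan('us-east-1_EXAMPLE', 'jane')) {
//   await idp[step.operation](step.params).promise();
// }
```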

I hope you found the above helpful. Till the next time, Enjoy!

Use Amazon Sumerian Hosts To Make Your Amazon Lex Bots More Approachable

Background

Chatbots are great. They help alleviate long customer support queues by enabling customers to self-service common issues. This increases customer satisfaction and reduces the business’s expenditure on staffing customer support centres. Chatbots are also available 24×7, which gives customers the freedom to use them at a time convenient to them.

However, in my opinion, chatbots (be they web or phone bots) can sometimes fall short of their intended duty. This is felt most in areas where customers are used to a face-to-face conversation. It can be argued that even areas that don’t require face-to-face interaction can still benefit from having it. Customers feel better when they are conversing with humans, and in this world of ever-increasing automation, sometimes this can be the reason a person buys something from your company rather than somewhere else. Unfortunately, it is not economically feasible for a business to keep increasing its customer support personnel count.

For areas where face-to-face interaction with customers would be a better approach, you could use human-like avatars instead of faceless web chatbots!

Amazon Sumerian enables you to create virtual hosts that look almost human. These can be used to interact with your customers, providing a more human-like feeling. These virtual hosts can connect to your Amazon Lex bots, enabling a more human-like representation of your bots. Also, just like Amazon Lex bots, Amazon Sumerian hosts can scale to handle the increase in customer calls.

In this blog, I will extend my Life Insurance Amazon Lex bot (https://nivleshc.wordpress.com/2020/06/01/create-an-omnichannel-chatbot-using-amazon-lex-and-amazon-connect/), by adding an Amazon Sumerian host to it. This will enable me to serve the same solution, however by using a human-like avatar instead of a faceless chatbot.

High Level Architecture Diagram

Below is a high-level architecture diagram for the entire solution referenced in this blog. However, in this blog, we will be just building the components within the green box.

All the other components were built as part of my previous blogs. Here is the link to the most recent blog that provides instructions on how to build the components outside of the green box https://nivleshc.wordpress.com/2020/06/01/create-an-omnichannel-chatbot-using-amazon-lex-and-amazon-connect/

If you intend on following the instructions in this blog, you don’t need all the previous components in place. At a minimum, you require a working Amazon Lex bot. If you don’t have one, you can follow this blog to create one https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/

Note: To keep this blog at an acceptable length, it will not include instructions on how to publish the Amazon Sumerian scene either via a public or private endpoint. However, you will still be able to test your Amazon Sumerian scene by using the play buttons within the Amazon Sumerian Portal.

Steps 1 – 12 are discussed in https://nivleshc.wordpress.com/2020/06/01/create-an-omnichannel-chatbot-using-amazon-lex-and-amazon-connect/

Steps 13 – 16 are described below:

13. The user interacts with the Amazon Sumerian host via the website it is hosted on.

14. Amazon Sumerian acts as a proxy, passing the user input to the Amazon Lex bot.

15. The Amazon Lex bot processes the user’s input and provides its response to Amazon Sumerian.

16. The Amazon Sumerian host then relays the response to the user.

As a teaser, to showcase what you can expect after following the instructions in this blog, I have uploaded a recording of the Amazon Sumerian host I created to proxy my Life Insurance Amazon Lex bot. Here is the link to the video https://youtu.be/9P-xVd5myDo

Prerequisites

As mentioned above, this blog will be extending the solution that was created in https://nivleshc.wordpress.com/2020/06/01/create-an-omnichannel-chatbot-using-amazon-lex-and-amazon-connect/. While it would be beneficial to have all the components from that blog, at a minimum, you must have a working Amazon Lex bot running in the same AWS Region as the one where the Amazon Sumerian host will be created. If you don’t have one, you can use the instructions in this blog to create an Amazon Lex bot https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/.

Implementation

Eager to start? Let’s begin.

  1. Firstly, we must create an Amazon Cognito Identity Pool. This will be used by the Amazon Sumerian scene to access AWS services within the AWS Account (for this blog, we need access to Amazon Polly and Amazon Lex).

    At the time of writing, only Amazon Cognito Identity Pools in the AWS Regions us-east-1, us-west-2 and eu-west-1 support Amazon Lex. If you use an Amazon Cognito Identity Pool created in any other AWS Region with Amazon Sumerian, you will get an error message similar to below:

    For this blog, all resources (including the Amazon Lex bot) will be created in the North Virginia (us-east-1) AWS Region.

    Login to the AWS Management Console and then open the Amazon Cognito portal. Change to the US East (N. Virginia) AWS Region.

  2. Click Manage Identity Pools and then click Create new identity pool.
  3. In the next screen do the following:
    1. Give the identity pool a descriptive name (for example AmazonSumerianIdentityPool)
    2. Under Unauthenticated identities tick Enable access to unauthenticated identities.
    3. Click Create Pool.
  4. In the next screen, click on the arrowhead beside View Details. We will create new Identity and Access Management (IAM) Roles for this Amazon Cognito Identity Pool. Ensure that for both the authenticated and unauthenticated identities, the option Create a new IAM Role is selected beside IAM Role. Note down the Role Names, as we will require these later. Click Allow.
  5. In the next screen, note down the Identity pool ID (this is shown in the right-hand side screen, under Get AWS Credentials).
  6. Next, we need to modify the newly created IAM Roles. Open the IAM portal and search for the roles.
  7. Attach the following additional IAM Policies
    1. AmazonPollyReadOnlyAccess
    2. AmazonLexRunBotsOnly
  8. Once done, go to the Amazon Sumerian portal and select the same AWS Region as the Amazon Cognito Identity Pool that was just created.
  9. From the left-hand side menu, click Home and then from the right-hand side screen, click Create new scene. When prompted, give an appropriate name for the scene (for example LifeInsuranceHost) and then click Create.
  10. Once the scene has loaded, in the top left-hand side, under Entities click the root entity (this will correspond to the scene name you gave). Then in the far right-hand side panel, expand AWS Configuration section. Then under Cognito Identity Pool ID enter the identity pool ID from above.

  11. Next, we will add a background to the scene. From the top middle of the screen click Import Assets. In the next screen, in the top right-hand side search box type Blue Skysphere. When it is shown, click on it to reveal its properties (shown in the far right-hand side panel). Then click Add to add it to your scene’s Assets.
  12. Next, click on the root entity to reveal its properties in the far right-hand side panel. Expand Environment and then expand Skybox. From under Assets (in the far left-hand side panel), drag the Blue Skysphere entity (located under the Blue Skysphere asset pack) to Drop Skybox under Skybox in the far right-hand side panel.

  13. Now, let’s add a Sumerian host. Click Import Assets from the top middle of the screen. In the next screen, delete anything you might have in the search box. This will then display all assets. Find a Sumerian host of your choice (for example Cristine Hoodie) and click on it to reveal its properties (displayed in the far right-hand side panel). Click Add to add it to your scene’s Assets.
  14. Once the Sumerian host has been added to your Assets, (if not already expanded) click on the arrowhead in front of the Sumerian host’s name to expand it. Then from underneath, click the component that has the Sumerian host’s name with a hexagon prefix. Drag this into the scene to add it.

    After the Sumerian host finishes loading, you will be able to see it within the scene.

  15. Let’s customise the Sumerian host. Click on the displayed Sumerian host (or click on the Sumerian hosts name from under the root entity) to display its properties in the far right-hand side panel.
  16. To make the Sumerian host follow you with its gaze, expand Host in the right-hand side panel. Under Point of Interest set Point of Interest to Look at Entity. Then drag Default Camera from under Entities (located in the far left-hand side panel) to Drop entity beside Target Entity.
  17. To move the Sumerian host to any other part of the scene, use the appropriate arrows shown beside the Sumerian host.
  18. Next, let’s give our Sumerian host the ability to speak and to use gestures. For speech, we must create a speech file. With the Sumerian host’s properties still visible (in the far right-hand side panel) expand Speech. Choose the appropriate voice (for example Salli). Under Speech Files click the + symbol. This will bring up the Sumerian text editor. Write an appropriate welcome message in the right-hand side panel. In the top left-hand side, under Documents hover over the filename corresponding to this speech till you see a pen symbol. Click on the pen symbol to rename the file. Give it an appropriate name (for example intro_speech). Click Save (located at the bottom) and close the Sumerian text editor screen. You should now see the newly created speech filename under Speech Files (in the far right-hand side panel).
  19. To enable the Sumerian host for gestures, expand Gesture Map and then click +. This will open the Sumerian text editor, automatically creating a file called DefaultGestureMap. You can manually add SSML gesture tags to your speech file or use this gesture map to automatically parse your speech file and have the appropriate gesture SSML tags added (if you look at the DefaultGestureMap contents, you will see what words are mapped to each gesture). We will use the automatic method. Close the Sumerian text editor. Under Speech Files locate the speech file that you had created and click the icon beside it that looks like a person with wings. This will automatically add the gesture SSML directives to your speech file, based on the DefaultGestureMap.

    Note: The speech file and DefaultGestureMap file will be displayed under Assets (bottom far left-hand side panel).

  20. Next, we will attach the Sumerian host to our Amazon Lex bot. In the far right-hand side panel, click on Speech to collapse that section. Click Add Component (located at the bottom) and then choose Dialogue. Click Dialogue to expand it. Under Bot, for Name type the name of the bot (it has to be exactly as it appears in the Amazon Lex portal). For Alias type $LATEST (this refers to the latest version of the bot). If you are unsure of the bot name, click on Open AWS Console. This will open the Amazon Lex portal.
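
Behind the scenes, the Dialogue component boils down to calls against the Amazon Lex runtime. To make the Name/Alias settings concrete, here is a sketch of the equivalent postText request with the AWS SDK for JavaScript; the bot name, user ID and input text are illustrative.

```javascript
// Build the postText request that the Dialogue component conceptually makes.
function buildLexRequest(botName, userId, inputText) {
  return {
    botName,             // must match the bot name in the Amazon Lex portal exactly
    botAlias: '$LATEST', // the latest version of the bot, as entered for Alias
    userId,              // any stable identifier for this conversation
    inputText,           // what the user said
  };
}

// const AWS = require('aws-sdk');
// const lexRuntime = new AWS.LexRuntime({ region: 'us-east-1' });
// lexRuntime.postText(buildLexRequest('LifeInsuranceBot', 'visitor-1', 'I need a quote'))
//   .promise()
//   .then((response) => console.log(response.message));
```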

  21. We are almost done. Next, we need to teach our Sumerian host how to interact with a user. In our case, when the Amazon Sumerian scene loads, we want the Sumerian host to play the introduction speech file we created. Then we want the Sumerian host to ask the user questions and pass the user’s responses to the Amazon Lex bot. The response from the Amazon Lex bot will then be communicated to the user by the Amazon Sumerian host. To do all this, we need to use a state machine. We will create different states, each corresponding to the actions we want done, and then configure transitions from one state to another.

    From the far right-hand side panel, click State Machine to expand it. Click + beside Drop Behaviour. The right-hand side panel will now show a new behaviour section. Expand the Details section and change Name to something more descriptive (for example GenerateQuote).

    You will notice that there is a box in the bottom middle screen with the title State 1. Double click it so that its properties are shown in the far right-hand side panel. Change the Name to Introduce and click Add Action. In the next screen, search for Start Speech and click Add. In the far right-hand side panel, beside Speech choose the speech file that you had created earlier.

    Next, we need to create some more states. To create a new state, click Add State in the bottom middle screen. Create the following states with the mentioned actions.

  • Name: Wait for Key Press

    Action: Key Down

    Key: a

    I use “a” instead of the space bar that some other tutorials recommend. This is because pressing the space bar in the Chrome browser toggles the microphone from on to off, which breaks the conversation flow with the Sumerian host.

  • Name: Start Recording     [this state has two actions]

    Action: Key Up

    Key: a

    The key used here has to match the key used in Wait for Key Press above.

    Action: Start Microphone Recording

  • Name: Stop Recording

    Action: Stop Microphone Recording

  • Name: Send to Amazon Lex Bot

    Action: Send Audio Input to Dialogue Bot

    Tick both Log user input and Log user response (this helps in troubleshooting errors)

  • Name: Play Response

    Action: Start Speech

    Speech: don’t select anything for this

    Tick Use Lex Response (instead of reading out a speech file, this directs the Sumerian host to read the response from the Amazon Lex bot)

22. Connect the state boxes as per the screenshot below (click on the bottom part of each state box and then drag it to the title of the next state box). When connecting Send to Amazon Lex Bot and Play Response, make sure to click On Response Ready and not the whole state box.
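
To summarise the behaviour built above, the states form a simple loop. The sketch below models it in plain JavaScript (this is just an illustration of the flow, not Sumerian’s scripting API):

```javascript
// Each state maps to its successor; "Send to Amazon Lex Bot" moves on
// via the On Response Ready event, and "Play Response" loops back so
// the user can ask the next question.
const transitions = {
  'Introduce': 'Wait for Key Press',
  'Wait for Key Press': 'Start Recording',
  'Start Recording': 'Stop Recording',
  'Stop Recording': 'Send to Amazon Lex Bot',
  'Send to Amazon Lex Bot': 'Play Response', // On Response Ready
  'Play Response': 'Wait for Key Press',     // loop back
};

// Walk the machine a number of steps from a starting state.
function walk(startState, steps) {
  const visited = [startState];
  let current = startState;
  for (let i = 0; i < steps; i += 1) {
    current = transitions[current];
    visited.push(current);
  }
  return visited;
}
```

Walking six steps from Introduce brings you back to Wait for Key Press, which is exactly the conversational loop the Sumerian host follows.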

23. That’s it! Your Sumerian host is now ready. To run it, click the play button at the bottom of the screen. To pause, use the pause button, and to stop the Sumerian host, use the stop button. The Amazon Sumerian host will play the introduction speech file and will then act as a proxy between the user and the Amazon Lex bot. When you move around the scene using your mouse, the Amazon Sumerian host should follow you with its head and eyes. Pay close attention to the blinking eyes, hand gestures and slight movements. Pretty impressive, isn’t it?

24. If you are having any issues, you can use your browser’s console to troubleshoot. All messages, including errors, should be logged there.

I hope you found this blog useful. Till the next time, Enjoy!

Migrate An Amazon Lex Bot From One AWS Account To Another AWS Account

Background

I recently had the need to migrate an Amazon Lex bot from one AWS Account to another. Assuming this to be a trivial task, requiring a configuration export from the source account and an import into the destination account, I was up for the task. Unfortunately, reality turned out to be quite different!

The reason? The Amazon Lex bot I intended to migrate was using an AWS Lambda function in its fulfillment. As I found out, AWS Lambda functions are not included in the exported Amazon Lex bot configuration files. To make matters worse, even after I created the AWS Lambda function in the destination AWS Account, the import still failed, as it was trying to access the AWS Lambda function in the source AWS Account!

In this blog I will provide instructions on how to successfully migrate an Amazon Lex bot that is using an AWS Lambda function for fulfillment, from one AWS Account to another. The same steps can be used to migrate Amazon Lex bots from one AWS Region to another AWS Region, within the same AWS Account.

Some additional information

When you create Amazon Lex bots, you can have one of the following fulfillment configurations:

  • (Scenario 1) Return parameters to client
  • (Scenario 2) AWS Lambda function

For the first configuration, migration is simple. Unfortunately for the second configuration, the migration is a bit more involved.

In this blog, I will cover migration patterns for both above-mentioned Amazon Lex bot configurations.

Scenario 1 – Migration Approach

The migration approach for this scenario is straight forward. Here are the steps:

  1. Login to the AWS Management Console and then go to the Amazon Lex portal.
  2. Change to the AWS Region where the Amazon Lex bot (the one that you want to migrate) is located.
  3. Then from the left-hand side menu, click Bots.
  4. From the right-hand side screen, click the bot that you want to export. This will open its configuration.
  5. From the top, click on the Settings tab. Then from the left-hand side menu, click General.
  6. In the right-hand side screen, browse to the IAM role and ensure it is AWSServiceRoleForLexBots. This is the default role for Amazon Lex bots. We need to confirm that no additional policies have been attached to this. Click the IAM role to open it.
  7. The Identity and Access Management portal will now open, showing the policies attached to the AWSServiceRoleForLexBots role. Confirm that only the AmazonLexBotPolicy IAM policy is attached. If there are any additional policies attached, note them down.
  8. Return to the Amazon Lex portal and select the bot that you want to migrate. From Actions click Export (Ensure your Amazon Lex bot has been published, otherwise you won’t be able to export it).
  9. In the next screen, choose the version of the bot you want to export and the platform you will be exporting to. Since the destination will be an Amazon Lex bot, choose Amazon Lex. Click Export.
  10. When prompted, provide a location to store the zip file containing the exported Amazon Lex bot configuration. Voilà! The settings have now been successfully exported.
  11. To import, login to the AWS Management Console of the destination AWS Account and then go to the Amazon Lex portal.
  12. Change to the AWS Region where you want to import the Amazon Lex bot to.
  13. From the left-hand side menu, click Bots.
  14. Then from the right-hand side screen, click on Actions and choose Import.
  15. In the next screen, for Upload file browse to the configuration file that you had exported from the source AWS Account (listed above). Click Import.
  16. Once the import has finished, the bot will be displayed in the Amazon Lex portal.
  17. If the source Amazon Lex bot’s IAM role had additional policies attached, open the IAM role for the destination Amazon Lex bot and attach these additional IAM policies.
  18. Proceed to Build and then Publish the imported Amazon Lex bot.

You should now have a successfully migrated Amazon Lex bot in the destination account. Go through the configuration to confirm the settings and test it to ensure it behaves as expected. If you don’t need the source Amazon Lex bot, this can now be safely deleted.
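
For repeat migrations, the console export in steps 8–10 can also be done with the Amazon Lex Model Building Service getExport operation, which returns a pre-signed URL to the zip file. A sketch follows; the bot name and version passed in are placeholders.

```javascript
// Build a getExport request matching the console choices in step 9:
// export the bot itself, in Amazon Lex format.
function buildExportRequest(botName, botVersion) {
  return {
    name: botName,
    version: botVersion, // a published numeric version, e.g. '1'
    resourceType: 'BOT',
    exportType: 'LEX',
  };
}

// const AWS = require('aws-sdk');
// const lexModels = new AWS.LexModelBuildingService();
// lexModels.getExport(buildExportRequest('LifeInsuranceBot', '1')).promise()
//   .then((response) => console.log(response.url)); // pre-signed URL for the zip
```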

Scenario 2 – Migration Approach

In my opinion, Scenario 1 is used mostly when you are learning about Amazon Lex bots. As you get more experienced with it and are ready to create bots for real-world applications, you will require an AWS Lambda function for fulfillment. Unfortunately, with such sophisticated Amazon Lex bots, the migration process is a bit more involved.

Here are the steps:

  1. Login to the AWS Management Console and then go to the Amazon Lex portal.
  2. Change to the AWS Region where the Amazon Lex bot (the one that you want to migrate) is located.
  3. Then from the left-hand side menu, click Bots.
  4. From the right-hand side screen, click the bot that you want to export. This will open its configuration.
  5. In the screen that is shown, ensure the Editor tab is selected.
  6. In the left-hand side screen, under Intents click on each intent that is listed, and note the Fulfillment configuration. If this is set to AWS Lambda function then export that AWS Lambda function’s configuration (Exporting AWS Lambda function configuration is out-of-scope for this blog, however as a guide, in the Editor tab of the Amazon Lex bot configuration, click on the View in Lambda console link underneath the Lambda function drop-down list. This will open the AWS Lambda function. Then from Actions click on Export function. You will be presented with two options – Download SAM file or Download deployment package. Based on your choice, you can then import the AWS Lambda function in the destination AWS Account using either AWS CloudFormation or AWS SAM).
  7. Go back to the Amazon Lex bot’s configuration. From the top, click on the Settings tab. Then from the left-hand side menu, click General.
  8. In the right-hand side screen, locate the IAM role and ensure it is AWSServiceRoleForLexBots. This is the default role for Amazon Lex bots. We need to confirm that no additional policies have been attached to it. Click the IAM role to open it.
  9. The Identity and Access Management portal will now open, showing the policies attached to the AWSServiceRoleForLexBots role. Confirm that only the AmazonLexBotPolicy IAM policy is attached. If there are any additional policies attached, note them down.
  10. Go back to the Amazon Lex portal and in the right-hand side screen select the bot that is to be migrated.
  11. From Actions click Export (Ensure your Amazon Lex bot has been published, otherwise you won’t be able to export it).
  12. In the screen that comes up, select the version of the bot to export and the platform to export it to. Since the destination will be an Amazon Lex bot, for platform choose Amazon Lex. Click Export.
  13. When prompted, provide a location to store the zip file containing the exported Amazon Lex bot configuration. The settings have now been successfully exported.
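If you prefer to script steps 11–13, the Lex V1 GetExport API returns a presigned URL for the export zip. Below is a minimal sketch, assuming boto3 is installed and the export has already been requested (the function raises if the export is not yet ready); bot name, version, path and region are placeholders:

```python
def export_bot(bot_name, bot_version, dest_zip, region):
    """Script the console's Actions -> Export: ask the Lex V1 GetExport API
    for a presigned URL to the export zip, then download it."""
    import boto3            # imported lazily so the module loads
    import urllib.request   # without boto3/AWS credentials present
    lex = boto3.client("lex-models", region_name=region)
    resp = lex.get_export(
        name=bot_name,
        version=bot_version,
        resourceType="BOT",
        exportType="LEX",
    )
    if resp["exportStatus"] != "READY":
        raise RuntimeError(f"export not ready: {resp.get('failureReason')}")
    urllib.request.urlretrieve(resp["url"], dest_zip)
    return dest_zip
```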
  14. Login to the destination AWS Account and create the AWS Lambda functions that were exported in step 6 above. Ensure these are created in the same AWS Region where the migrated Amazon Lex bot will reside.
  15. Using the AWS Lambda portal, note down the ARN of all the AWS Lambda functions that were just created.
  16. As you might have guessed, the challenge with using the exported Amazon Lex bot configuration is that it still refers to the source account’s AWS Lambda functions. Due to this (unless you have cross-account permissions set up for the AWS Lambda functions – however this is not what we want), the Amazon Lex bot import will fail. So, before we import the exported configuration, we need to update it so that it points to the newly created AWS Lambda functions.
  17. The exported configuration file is a zip file. Using an archiving utility such as WinRAR or WinZip, unzip the exported configuration file. This will generate a JSON file containing the configuration.
  18. Open the JSON file in a code editor such as Microsoft Visual Studio Code.
  19. Locate the intents block. Inside it, find the fulfillmentActivity section and locate the uri key (underneath codeHook). The value of this uri key is the ARN of the AWS Lambda function that this intent uses for fulfillment. Update this uri value with the ARN of the corresponding AWS Lambda function that was just created in the destination AWS Account. Do the same for any other intents, updating their uri values with the appropriate AWS Lambda function ARNs. Once done, save the JSON file.
  20. Pro Tip: If you want the name of the Amazon Lex bot in the destination AWS Account to be different from the source Amazon Lex bot, change the value of the name key in the resource block of the JSON file.
  21. Using an archiving utility (such as WinRAR or WinZip), archive the updated JSON file into a zip file.
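Steps 17–21 can also be scripted. Below is a minimal Python sketch, assuming the Lex V1 export layout (intents under a top-level resource block, as described above); the intent names and ARNs used with it are hypothetical:

```python
import json
import zipfile

def update_fulfillment_arns(bot_config, arn_map):
    """Rewrite each intent's codeHook uri to the destination account's
    AWS Lambda ARN. arn_map maps intent name -> new Lambda ARN."""
    for intent in bot_config["resource"].get("intents", []):
        code_hook = intent.get("fulfillmentActivity", {}).get("codeHook")
        if code_hook and intent["name"] in arn_map:
            code_hook["uri"] = arn_map[intent["name"]]
    return bot_config

def rewrite_export(zip_in, zip_out, arn_map):
    """Unzip the export, patch the JSON, and rezip it (steps 17, 19 and 21)."""
    with zipfile.ZipFile(zip_in) as zf:
        json_name = zf.namelist()[0]
        bot_config = json.loads(zf.read(json_name))
    update_fulfillment_arns(bot_config, arn_map)
    with zipfile.ZipFile(zip_out, "w") as zf:
        zf.writestr(json_name, json.dumps(bot_config, indent=2))
```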
  22. Next, we need to grant invoke permissions to the Amazon Lex bot’s intent for the newly created AWS Lambda functions. If this is not in place, the import process will fail. There are two ways of achieving this. One is via the AWS Management Console, the other via AWS CLI. I will list both options in the next steps.
  23. Using AWS Management Console – In the destination AWS Account, go to the Amazon Lex portal (ensure it is in the correct AWS Region) and create an Amazon Lex bot with the same name as the source Amazon Lex bot (this is just a dummy Amazon Lex bot). Then create an intent inside this Amazon Lex bot, giving it the same name as the intent in the source Amazon Lex bot. Change the fulfillment for this intent to AWS Lambda function and choose the appropriate newly created AWS Lambda function from the drop-down list. A message will be displayed stating that this will grant the intent invoke permissions on the AWS Lambda function. Click to confirm. It is extremely important that the dummy Amazon Lex bot name and the dummy intent name match the source Amazon Lex bot name and intent.
  24. Using AWS CLI – use the following command in the destination AWS Account
    aws lambda add-permission --function-name <lambda-function-name> --action lambda:InvokeFunction --statement-id <statement-id-label> --principal lex.amazonaws.com --source-arn <intent-arn> --region <region>
    

    where

    lambda-function-name is the name of the AWS Lambda function for which the Amazon Lex bot intent will be granted invoke permissions.

    statement-id-label is just a label for this permission policy (shown as the value for sid). As a good practice, you can use the format lex-<region>-<intent-name> (for example lex-us-east-1-StartConversation).

    intent-arn is the ARN of the destination Amazon Lex bot’s intent. As this doesn’t exist yet, you won’t be able to look up this value; however, the ARN has the following format: arn:aws:lex:<region>:<account-id>:intent:<intent-name>:* . Replace the placeholder variables with the appropriate values. Don’t forget to wrap the ARN value in double quotes when using it in the CLI (for example "arn:aws:lex:us-east-1:11111022222:intent:StartConversation:*").

    region is the region where the AWS Lambda function is.
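The same permission can also be granted with boto3. Below is a sketch: the intent ARN is built from the format described above, and boto3 is imported lazily so the ARN helper can be used on its own, without AWS credentials:

```python
def intent_arn(region, account_id, intent_name):
    """Build the destination intent ARN per the format described above."""
    return f"arn:aws:lex:{region}:{account_id}:intent:{intent_name}:*"

def grant_lex_invoke(function_name, region, account_id, intent_name):
    """boto3 equivalent of the 'aws lambda add-permission' command above."""
    import boto3  # lazy import: intent_arn() needs no AWS credentials
    lambda_client = boto3.client("lambda", region_name=region)
    return lambda_client.add_permission(
        FunctionName=function_name,
        StatementId=f"lex-{region}-{intent_name}",
        Action="lambda:InvokeFunction",
        Principal="lex.amazonaws.com",
        SourceArn=intent_arn(region, account_id, intent_name),
    )
```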

  25. All the prerequisites have now been completed. The import process can now be initiated. Go to the Amazon Lex portal and from the left-hand side menu click Bots.
  26. In the right-hand side screen, from Actions choose Import.
  27. In the next screen, for Upload file browse to the configuration file that you had exported from the source AWS Account (listed above). Click Import.
  28. If you had created a dummy Amazon Lex bot and intent (step 23), a prompt will appear asking to confirm that the intent can be overwritten. Click Overwrite and continue.
  29. Once the import process completes, you will have successfully created a replica of the Amazon Lex bot in the destination AWS Account.
  30. If the source Amazon Lex bot’s IAM role had additional policies attached, open the IAM role for the destination Amazon Lex bot and attach these additional IAM policies.
  31. Proceed to Build and then Publish the imported Amazon Lex bot.

You should now have a successfully migrated Amazon Lex bot in the destination account. Go through the configuration to confirm the settings and test it to ensure it behaves as expected. If you don’t need the source Amazon Lex bot, this can now be safely deleted.

I hope you found this blog useful. Till the next time, Enjoy!

Create an Omnichannel Chatbot using Amazon Lex and Amazon Connect

Background

These days, chatbots are used pretty much everywhere. From getting quotes to resetting passwords, their use cases are endless. They also elevate the customer experience: customers no longer need to call during your helpdesk’s manned hours, instead they can call at a time that is convenient to them. One of the biggest business benefits is lessening the load on your support staff.

A good practice to adhere to when deploying chatbots is to make them channel agnostic. This means your chatbots are available via the internet and also via a phone call, and they provide the same customer experience on both. This enables your customers to choose whichever channel suits them best, without any loss of customer experience.

In this blog, I will demonstrate how you can use Amazon Connect and Amazon Lex, to create an omnichannel chatbot.

I will be extending one of my previous blogs, so if you haven’t read it already, I would highly recommend that you do so prior to continuing. Here is the link to the blog: https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/

High Level Architecture Diagram

Below is the high-level architecture diagram for the solution. We will build the section inside the blue box.

For steps 1 – 5 please refer to https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/

For steps 6 – 8 please refer to https://nivleshc.wordpress.com/2020/05/24/publish-a-web-chatbot-frontend-using-aws-amplify-console/

Steps 9 – 12 will be created in this blog and are described below:

9. The customer calls the phone number for the chatbot. This is attached to a contact flow in Amazon Connect

10. Amazon Connect proxies the customer to the Amazon Lex chatbot (this is the web chatbot created in https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/ )

11. The output from the Amazon Lex chatbot is sent back to Amazon Connect.

12. Amazon Connect converts the output from the Amazon Lex chatbot into audio and then plays it to the customer.

Prerequisites

This blog assumes the following:

  • you already have an Amazon Connect instance deployed and configured
  • you have a working Amazon Lex web chatbot

You can refer to the following blogs, if you need to deploy either of the above prerequisites

https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/

https://nivleshc.wordpress.com/2020/05/24/publish-a-web-chatbot-frontend-using-aws-amplify-console/

Implementation

Let’s get started.

  1. Login to your AWS Management Console, open the Amazon Connect console and change to the respective AWS region.
  2. Within the Amazon Connect console, choose the instance that will be used and click its Instance Alias.
  3. In the next screen, from the left-hand side menu, click Contact flows. Then, in the right-hand side screen, under Amazon Lex select the Region where the Amazon Lex bot resides. From the Bot drop-down list, select the name of the Amazon Lex bot. Once done, click +Add Lex Bot.
  4. Click Overview from the left-hand side menu, and then click Login URL to open the Amazon Connect administration portal. Enter your administrator credentials (currently only the following internet browsers are supported – latest three versions of Google Chrome, Mozilla Firefox ESR, Mozilla Firefox).
  5. Once logged in, from the left hand-side menu, click Routing and then Contact flows.
  6. Click Create contact flow (located on the top-right). You will now see the contact flow designer.
  7. Enter a name for the contact flow (top left where it says Enter a name)
  8. From the left-hand side menu, expand Interact and drag Get customer input to the right-hand side screen.
  9. Click on the circle to the right of Start in the Entry point block and drag the arrow to the Get customer input block. This will connect the two blocks.
  10. Click on the title of the Get customer input block to open its configuration.
  11. Select Text-to-speech or chat text. Click Enter text and in the textbox underneath, type the message you want to play to the customer when they call the chatbot.
  12. Click Amazon Lex and then under Lex bot Name select the Amazon Lex bot that you had created earlier (if the bot doesn’t show, ensure you had carried out step 3 above). Under Intents type the Amazon Lex bot intent that should be invoked. Click Save.

13. From the left-hand side menu, expand Interact and drag two Play prompt blocks to the right-hand side screen.

14. The first Play prompt block will be used to play a goodbye message, after the Amazon Lex bot has finished execution. Click on the title of this block to open its configuration. Click Text-to-speech or chat text and then click Enter text. Enter a message to be played before the call is ended. Click Save.

15. The second Play prompt block will be used to play a message when an error occurs. Click on the title of this block to open its configuration. Click Text-to-speech or chat text and then click Enter text. Enter a message to be played when an error occurs. Click Save.

16. From the left-hand side menu, expand Terminate / Transfer and drag the Disconnect / hang up block to the right-hand side screen.

17. In the designer (right-hand side screen), in the Get customer input block, click the circle beside startConversation (this is the name of your Amazon Lex bot intent) and drag the arrow to the first Play prompt block.

18. Repeat step 17 for the circle beside Default in the Get customer input block.

19. In the Get customer input block, click the circle beside Error and drag it to the second Play prompt block.

20. Inside both the Play prompt blocks, click the circle beside Okay and drag the arrow to the Disconnect / hang up block.

21. From the top-right, click Save. This will save the work you have done.

22. From the top-right, click Publish. You will get a prompt Are you sure you want to publish this content flow? Click Publish.

23. Once done, you should see a screen similar to the one below:

24. Next, we need to ensure that whenever someone calls, the newly created contact flow is invoked. To do this, from the left-hand side menu, click Routing and then click Phone numbers.

25. In the right-hand side, click the phone number that will be used for the chatbot. This will open its settings. Enter a description (optional) and then from the drop-down list underneath Contact flow / IVR, choose the contact flow that was created above. Click Save.

Give it a few minutes for the settings to take effect. Now, call the phone number that was assigned to the contact flow above. You should be greeted by the welcome message you had created above. The phone chatbot experience would be the same as what you experienced when interacting with the chatbot over the internet!

Congratulations! You just created your first omnichannel chatbot! How easy was that?

Till the next time, Enjoy!

Publish A Web Chatbot Frontend Using AWS Amplify Console

Background

In my previous blog (https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex/), I demonstrated how easy it is to create a web chatbot using Amazon Lex. As discussed in that blog, one of the challenges with Amazon Lex is the lack of an out-of-the-box frontend for the bots. This can throw a spanner in the works if you are planning on showcasing your chatbots to customers without wanting to overwhelm them with code. Luckily, with some work, you can create a frontend that exposes just the bot. I provided instructions for achieving this in the same blog.

Having a static website hosted out of an Amazon S3 bucket is good, however it does come with a few challenges. As the website gains popularity, it becomes more integral to your business. Soon, you will not be able to afford any website outages. In such situations, how do you deploy changes to the website without breaking it? How do you track the changes, to ensure they can be rolled back, if something does break? How do you ensure your end-users don’t suffer from slow webpage loads? These are some of the questions that need to be answered, as your website achieves popularity.

AWS Amplify Console provides an out-of-the-box solution for deploying static websites. The contents of the website can be hosted in a source code repository. This provides an easy solution to track changes, and to rollback, should the need arise. AWS Amplify Console serves the website using Amazon CloudFront, AWS’s Content Delivery Network. This ensures speedy page loads for end-users. These are just some of the features that make hosting a static website using AWS Amplify Console a great choice.

In this blog, I will modify my life insurance quote web chatbot solution by migrating its frontend from an Amazon S3 bucket to AWS Amplify Console.

High Level Architecture Diagram

Below is a high-level architecture diagram for the solution described in this blog.

Steps 1 – 5 are from my previous blog https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex

Steps 6 – 8 will be created in this blog and are described below:

6. Website developer will push changes for the Amazon Lex web chatbot frontend to GitHub.

7. GitHub will inform AWS Amplify about the new changes.

8. AWS Amplify will retrieve the changes from GitHub, build the new web chatbot frontend and deploy it, thereby updating the previous web chatbot frontend.

Implementation

Before we continue, if you haven’t already, I would highly recommend that you read my Amazon Lex Life Insurance quote generating web chatbot blog ( https://nivleshc.wordpress.com/2020/04/08/create-a-web-chatbot-for-generating-life-insurance-quotes-using-amazon-lex)

Let’s get started.

  1. Login to your AWS Management Console, open the AWS Amplify Console and change to your AWS Region of choice.
  2. Click on Connect app.
  3. Next, choose the location where the source code is hosted. As I have stored the web chatbot frontend files in GitHub, I chose GitHub. Note that the repository should contain only the frontend files. Click Continue.
  4. Unless you had previously authorised AWS Amplify Console access to your source code repository, a screen will pop-up requesting access to your source code repository. Click Authorize aws-amplify-console.
  5. Once successfully authorised, you will be returned to the AWS Amplify Console. Using the dropdown menu beside Select a repository select the repository that contains the frontend code.
  6. Next, choose the Branch to use and then click Next.
  7. The next screen shows the Configure build settings page. AWS Amplify Console will inspect the source code and deduce the appropriate App build and test settings. If what is shown is incorrect, or you would like to modify it, you can use the Edit button.

    In my case, I did not find anything needed changing from what AWS Amplify Console had provided.

    You can change the App name, if this needs to be different from what is automatically provided.

    Click Next.
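As an illustration, a typical auto-detected build specification for a plain static site (no build step) looks similar to the following sketch – your detected settings may differ:

```yaml
version: 1
frontend:
  phases:
    build:
      commands: []
  artifacts:
    # deploy the repository root as-is (the frontend files)
    baseDirectory: /
    files:
      - '**/*'
  cache:
    paths: []
```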

  8. In the next screen, review all the settings. Once confirmed, click Save and deploy.
  9. AWS Amplify Console will start creating the application. You will be redirected to the application’s configuration page, where on the right, a continuous deployment pipeline, similar to the one below, will be shown.

  10. Wait for all stages of the pipeline to complete successfully and then click on the url on the left (the one similar to https://master…amplifyapp.com). The page that opens next is the Insurance Chatbot frontend, served by AWS Amplify Console! (below is what the web chatbot looks like)

  11. Now, whenever the frontend files are modified and pushed into the master branch of the source code repository (GitHub in this case), AWS Amplify Console will automatically update the Insurance Chatbot frontend, with all changes easily trackable from within GitHub.
  12. You can use a custom domain name, to make the application URL more personalised (by default, AWS Amplify Console applications are allocated the amplifyapp.com domain urls). To do this, from your application’s configuration page, click Domain management in the left-hand side menu. Then click add domain and follow the instructions.
  13. You might also benefit from email notifications whenever AWS Amplify Console updates your application. To configure this, from your application’s configuration page, click Notifications in the left-hand side menu. Then click Add notification and add an email address to receive notifications for successful and failed builds.
  14. To view site access logs, from your application’s configuration page, click Access logs in the left-hand side menu.

There you go. Hopefully this provides valuable information for those looking for an easy solution to deploy their static websites in a consistent, auditable and highly available manner.

Till the next time, Enjoy!

Automate Training, Build And Deployment Of Amazon SageMaker Models Using AWS Step Functions

Background

A few weeks back, I was tasked with automating the training, build and deployment of an Amazon SageMaker model. Initially, I thought that an AWS Lambda Function would be the best candidate for this, however as I started experimenting, I quickly realised that I needed to look elsewhere.

After some research, I found articles that pointed me towards AWS Step Functions. As it happens, AWS has been making AWS Step Functions more Amazon SageMaker friendly, to the point that AWS Step Functions now natively supported most of the Amazon SageMaker APIs.

With the technology decided, I started figuring out how I would achieve what I had set out to do. I did find some good documentation and examples, however they didn’t entirely cover everything I was after.

After much research and experimentation, I finally created a solution that was able to automatically train, build and then deploy an Amazon SageMaker model.

In this blog, I will outline the steps I followed, with the hope that it benefits those wanting to do the same, saving them countless hours of frustration and experimentation.

High Level Architecture Diagram

Below is a high-level architecture diagram of the solution I used.

The steps (as denoted by the numbers in the diagram) are as follows:

  1. The first AWS Step Function state calls the Amazon SageMaker API to create a Training Job, passing it all the necessary parameters.
  2. Using the supplied parameters, Amazon SageMaker downloads the training and validation files from the Amazon S3 bucket, and then runs the training algorithm. The output is uploaded to the same Amazon S3 bucket that contained the training and validation files.
  3. The next AWS Step Function state calls the Amazon SageMaker API to create a model, using the artifacts from the Training Job.
  4. The next AWS Step Function state calls the Amazon SageMaker API to create an endpoint configuration, using the model that was created in the previous state.
  5. The next AWS Step Function state calls the Amazon SageMaker API to create a model endpoint, using the endpoint configuration that was created in the previous state.
  6. Using the endpoint configuration, Amazon SageMaker deploys the model using Amazon SageMaker Hosting Services, making it available to any client wanting to use it.

Let’s get started.

Implementation

For this blog, I will be using the data and training parameters described in the Amazon SageMaker tutorial at https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html

1. Create an Amazon S3 bucket. Create a folder called data inside your Amazon S3 bucket, within which, create three subfolders called train, validation and test (technically these are not folders and subfolders, but keys. However, to keep things simple, I will refer to them as folders and subfolders).

2. Follow Step 4 from the above-mentioned Amazon SageMaker tutorial (https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-preprocess-data.html) to download and transform the training, validation and test data. Then upload the data to the respective subfolders in your Amazon S3 bucket (we won’t be using the test data in this blog, however you can use it to test the deployed model).

For the next three months, you can download the transformed training, validation and test data from my Amazon S3 bucket using the following URLs

https://niv-sagemaker.s3-ap-southeast-2.amazonaws.com/data/train/examples

https://niv-sagemaker.s3-ap-southeast-2.amazonaws.com/data/validation/examples

https://niv-sagemaker.s3-ap-southeast-2.amazonaws.com/data/test/examples

3. Create an AWS IAM role with the following permissions:

AmazonSageMakerFullAccess (AWS Managed Policy)

and a custom policy to read, write and delete objects from the Amazon S3 bucket created in Step 1. The policy will look similar to the one below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucketName"
      ]
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucketName/*"
      ]
    }
  ]
}

where bucketName is the name of the Amazon S3 bucket created in Step 1 above.

4. Open the AWS Step Functions console and change to the AWS Region where the Amazon SageMaker model endpoint will be deployed.

5. Create a new state machine, choose Author with code snippets and set the type to Standard.

6. Under Definition delete everything and paste the following
{
  "Comment": "An AWS Step Function State Machine to train, build and deploy an Amazon SageMaker model endpoint",
  "StartAt": "Create Training Job",

The above lines provide a comment describing the purpose of this AWS Step Function state machine and set the first state name to Create Training Job.

For a full list of Amazon SageMaker APIs supported by AWS Step Functions, please refer to https://docs.aws.amazon.com/step-functions/latest/dg/connect-sagemaker.html

7. Use the following code to create the first state (these are the training parameters described in the above-mentioned Amazon SageMaker tutorial).

"States": {
  "Create Training Job": {
    "Type": "Task",
    "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
    "Parameters": {
      "TrainingJobName.$": "$$.Execution.Name",
      "ResourceConfig": {
        "InstanceCount": 1,
        "InstanceType": "ml.m4.xlarge",
        "VolumeSizeInGB": 5
      },
      "HyperParameters": {
        "max_depth": "5",
        "eta": "0.2",
        "gamma": "4",
        "min_child_weight": "6",
        "silent": "0",
        "objective": "multi:softmax",
        "num_class": "10",
        "num_round": "10"
      },
      "AlgorithmSpecification": {
        "TrainingImage": "544295431143.dkr.ecr.ap-southeast-2.amazonaws.com/xgboost:1",
        "TrainingInputMode": "File"
      },
      "OutputDataConfig": {
        "S3OutputPath": "s3://bucketName/data/modelartifacts"
      },
      "StoppingCondition": {
        "MaxRuntimeInSeconds": 86400
      },
      "RoleArn": "iam-role-arn",
      "InputDataConfig": [
        {
          "ChannelName": "train",
          "ContentType": "text/csv",
          "DataSource": {
            "S3DataSource": {
              "S3DataType": "S3Prefix",
              "S3Uri": "s3://bucketName/data/train",
              "S3DataDistributionType": "FullyReplicated"
            }
          }
        },
        {
          "ChannelName": "validation",
          "ContentType": "text/csv",
          "DataSource": {
            "S3DataSource": {
              "S3DataType": "S3Prefix",
              "S3Uri": "s3://bucketName/data/validation",
              "S3DataDistributionType": "FullyReplicated"
            }
          }
        }
      ]
    },
    "Retry": [
      {
        "ErrorEquals": [
          "SageMaker.AmazonSageMakerException"
        ],
        "IntervalSeconds": 1,
        "MaxAttempts": 1,
        "BackoffRate": 1.1
      },
      {
        "ErrorEquals": [
          "SageMaker.ResourceLimitExceededException"
        ],
        "IntervalSeconds": 60,
        "MaxAttempts": 1,
        "BackoffRate": 1
      },
      {
        "ErrorEquals": [
          "States.Timeout"
        ],
        "IntervalSeconds": 1,
        "MaxAttempts": 1,
        "BackoffRate": 1
      }
    ],
    "Catch": [
      {
        "ErrorEquals": [
          "States.ALL"
        ],
        "ResultPath": "$.cause",
        "Next": "Display Error"
      }
    ],
    "Next": "Create Model"
  },

I would like to call out a few things from the above code:

"Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync" refers to the Amazon SageMaker API for creating a Training Job. When this state task runs, you will be able to see this Training Job in the Amazon SageMaker console.

TrainingJobName is the name given to the Training Job and it must be unique within the AWS Region, in the AWS account. In my code, I am setting this to the Execution Name (internally referred to as $$.Execution.Name), which is an optional parameter that can be supplied when executing the AWS Step Function state machine. By default, this is set to a unique random string, however to make the Training Job name more recognisable, provide a more meaningful unique value when executing the state machine. I tend to use the current time in the format <training-algorithm>-<year><month><date>-<hour><minute><second>
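As an example, here is a small helper (the function names and the state machine ARN parameter are hypothetical) that builds such an execution name and starts the state machine with it; boto3 is imported lazily so the naming helper runs without AWS credentials:

```python
from datetime import datetime, timezone

def execution_name(training_algorithm):
    """Build a unique, recognisable name in the format
    <training-algorithm>-<year><month><date>-<hour><minute><second>."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{training_algorithm}-{stamp}"

def start_state_machine(state_machine_arn, training_algorithm):
    """Start the state machine; $$.Execution.Name inside the definition
    then resolves to this name, which becomes the TrainingJobName."""
    import boto3  # lazy import: execution_name() needs no AWS credentials
    sfn = boto3.client("stepfunctions")
    return sfn.start_execution(
        stateMachineArn=state_machine_arn,
        name=execution_name(training_algorithm),
    )
```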

If you have ever used Jupyter notebooks to run an Amazon SageMaker Training Job, you would have used a line similar to the following:

        container = get_image_uri(boto3.Session().region_name, 'xgboost')

Yes, your guess is correct! Amazon SageMaker uses containers for running Training Jobs. The above line retrieves the xgboost training algorithm container for the region that the Jupyter notebook is running in.

These containers are hosted in Amazon Elastic Container Registry (Amazon ECR) and maintained by AWS. For each training algorithm that Amazon SageMaker supports, there is a specific container. Details for these containers can be found at https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html.

When submitting a Training Job using AWS Step Functions, you must supply the correct container name, from the correct region (the region where you will be running Amazon SageMaker from). This information is passed using the parameter TrainingImage. To find the correct container path, use https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html.

Another method for getting the value for TrainingImage is to manually submit a Training Job using the Amazon SageMaker console, using the same training algorithm that you will be using in the AWS Step Function state machine. Once the job has started, open it and have a look under the section Algorithm. You will find the Training Image for that particular training algorithm, for that region, listed there. You can use this value for TrainingImage.

S3OutputPath is the location where Amazon SageMaker will store the model artifacts after the Training Job has successfully finished.

RoleArn is the ARN of the AWS IAM Role that was created in Step 3 above.

S3Uri under ChannelName: train is the Amazon S3 bucket path to the folder where the training data is located.

S3Uri under ChannelName: validation is the Amazon S3 bucket path to the folder where the validation data is located.

DON’T FORGET TO CHANGE bucketName TO THE AMAZON S3 BUCKET THAT WAS CREATED IN STEP 1 ABOVE

In the next AWS Step Function state, the model will be created using the artifacts generated from the Training Job.

8. An AWS Step Function state receives input parameters, does its processing and then produces an output. If there is another state next in the path, the output is provided as an input to that state. This is an elegant way for passing dynamic information between states.
Here is the code for the next AWS Step Functions state.
"Create Model": {
  "Parameters": {
    "PrimaryContainer": {
      "Image": "544295431143.dkr.ecr.ap-southeast-2.amazonaws.com/xgboost:1",
      "Environment": {},
      "ModelDataUrl.$": "$.ModelArtifacts.S3ModelArtifacts"
    },
    "ExecutionRoleArn": "iam-role-arn",
    "ModelName.$": "$.TrainingJobName"
  },
  "Resource": "arn:aws:states:::sagemaker:createModel",
  "Type": "Task",
  "ResultPath": "$.taskresult",
  "Next": "Create Endpoint Config"
},

Image refers to the same container that was used in the Create Training Job state.

ModelDataUrl refers to the location where the model artifacts that were created in the previous state are stored. This value is part of the output (input to this state) from the previous state. To reference it, use $.ModelArtifacts.S3ModelArtifacts.

ExecutionRoleArn is the ARN of the AWS IAM Role that was created in Step 3 above.

"Resource": "arn:aws:states:::sagemaker:createModel" refers to the Amazon SageMaker API for creating a model.

To keep things simple, the name of the generated model will be set to the TrainingJobName. This value is part of the output (input to this state) from the previous state. To reference it, use $.TrainingJobName.

After this state finishes execution, you will be able to see the model in the Amazon SageMaker console.

The next state is for creating an Endpoint Configuration using the model that was just created.

Before we continue, I want to point out an additional parameter that I am using: "ResultPath": "$.taskresult". Let me explain the reason for using this. In my next state, I must provide the name of the model that will be used to create the Endpoint Configuration. Unfortunately, this name is not part of the output of the current Create Model state, so I won’t be able to reference it. However, as you might remember, for simplicity we set the model name to be the same as TrainingJobName, and guess what, this is part of the current state’s input parameters! Now, if only there was a way to make the current state include its input parameters in its output. Oh wait! There is a way. Using "ResultPath": "$.taskresult" instructs this AWS Step Function state to include its input parameters in its output.
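Here is a rough Python illustration of what "ResultPath": "$.taskresult" achieves. This simulates the input/output passing, it is not how Step Functions is implemented internally, and the job name and model ARN are hypothetical:

```python
def with_result_path(state_input, task_result, key="taskresult"):
    """Without ResultPath, the task result would replace the state's input.
    With "ResultPath": "$.taskresult", the result is grafted onto the input
    under the taskresult key, so fields like TrainingJobName flow through."""
    output = dict(state_input)
    output[key] = task_result
    return output

# Simulated Create Model state: its output keeps TrainingJobName available
# for the Create Endpoint Config state that follows.
state_input = {"TrainingJobName": "xgboost-20200101-093000"}
state_output = with_result_path(
    state_input,
    {"ModelArn": "arn:aws:sagemaker:ap-southeast-2:111122223333:model/xgboost-20200101-093000"},
)
```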

9. Here is the code for the AWS Step Function state to create an Endpoint Config.

"Create Endpoint Config": {
  "Type": "Task",
  "Resource": "arn:aws:states:::sagemaker:createEndpointConfig",
  "Parameters": {
    "EndpointConfigName.$": "$.TrainingJobName",
    "ProductionVariants": [
      {
        "InitialInstanceCount": 1,
        "InstanceType": "ml.t2.medium",
        "ModelName.$": "$.TrainingJobName",
        "VariantName": "AllTraffic"
      }
    ]
  },
  "ResultPath": "$.taskresult",
  "Next": "Create Endpoint"
},

This state is pretty straightforward.

“Resource”: “arn:aws:states:::sagemaker:createEndpointConfig” refers to the Amazon SageMaker API to create an Endpoint Configuration

For simplicity, we will set the Endpoint Configuration name to be the same as the TrainingJobName. The Endpoint will initially be deployed using one ml.t2.medium instance.

As in the previous state, we will use "ResultPath": "$.taskresult" so that this state's input parameters are included in its output.

In the final state, I will instruct Amazon SageMaker to deploy the model endpoint.

10. Here is the code for the final AWS Step Function state.

"Create Endpoint": {
  "Type": "Task",
  "Resource": "arn:aws:states:::sagemaker:createEndpoint",
  "Parameters": {
    "EndpointConfigName.$": "$.TrainingJobName",
    "EndpointName.$": "$.TrainingJobName"
  },
  "End": true
},

The Endpoint Configuration from the previous state is used to deploy the model endpoint using Amazon SageMaker Hosting Services.

“Resource”:”arn:aws:states:::sagemaker:createEndpoint” refers to the Amazon SageMaker API for deploying an endpoint using Amazon SageMaker Hosting Services. After this state completes successfully, the endpoint is visible in the Amazon SageMaker console.

For simplicity, the name of the Endpoint is also set to the TrainingJobName.

To keep things tidy, it is nice to display an error when things don’t go as planned. There is an AWS Step Function state for that!

11. Here is the code for the state that displays the error message. This state only gets invoked if there is an error in the Create Training Job state.

"Display Error": {
  "Type": "Pass",
  "Result": "Finished with errors. Please check the individual steps for more information",
  "End": true
}
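For the Display Error state to be reached, the Create Training Job state needs a Catch clause routing failures to it. The snippet below builds one in Python and prints it as JSON; the exact clause in the published gist may differ, so treat this shape as an assumption:

```python
import json

# Hypothetical Catch clause for the Create Training Job state.
# States.ALL matches any error; the error details are stored under
# $.error before control moves to the Display Error state.
catch_clause = [
    {
        "ErrorEquals": ["States.ALL"],
        "ResultPath": "$.error",
        "Next": "Display Error"
    }
]

print(json.dumps({"Catch": catch_clause}, indent=2))
```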

The full AWS Step Function state machine code is available at https://gist.github.com/nivleshc/a4a99a5c2bca1747b6da0d7da0e388c1

When creating the AWS Step Function state machine, you will be asked for an AWS IAM Role that will be used by the state machine to run the states. Unless you already have an AWS IAM Role that can carry out all the state tasks, choose the option to create a new AWS IAM Role.

To invoke the AWS Step Function state machine, click on New execution and provide a name for the execution id. As each of the states is run, you will see visual feedback in the AWS Step Function schematic. You will be able to see the tasks in the Amazon SageMaker console as well.

To take the above one step further, you could invoke the AWS Step Function state machine whenever new training and validation data is available in the Amazon S3 bucket. The new model can then be used to update the existing model endpoint.
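As a sketch of that automation, an AWS Lambda function subscribed to the bucket's ObjectCreated events could start the state machine. The ARN and input field below are placeholders, so substitute your own; the boto3 call is commented out so the sketch runs without AWS credentials:

```python
import json

# Placeholder ARN - replace with your own state machine's ARN
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:TrainBuildDeploy"

def lambda_handler(event, context):
    # S3 put events carry the bucket and key of the newly uploaded object
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # "TrainingData" is a made-up input field; shape the input to whatever
    # your state machine's first state expects
    execution_input = json.dumps({"TrainingData": "s3://{}/{}".format(bucket, key)})

    # Uncomment to actually start the execution (requires AWS credentials):
    # import boto3
    # sfn = boto3.client("stepfunctions")
    # sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN, input=execution_input)

    return {"started_for": "s3://{}/{}".format(bucket, key)}
```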

That's it folks! This is how you can automatically train, build and deploy an Amazon SageMaker model!

Once you are finished, don’t forget to clean-up, to avoid any unnecessary costs.

The following must be deleted using the Amazon SageMaker console

  • The model endpoint
  • The Endpoint Configuration
  • The model
  • Any Jupyter Notebook instances you might have provisioned and don’t need anymore
  • Any Jupyter notebooks that are not needed anymore

If you don’t have any use for the following, these can also be deleted.

  • The contents of the Amazon S3 bucket and the bucket itself
  • The AWS IAM Role that was created and the custom policy to access the Amazon S3 bucket and its contents
  • The AWS Step Function state machine

Till the next time, Enjoy!

Create A Web Chatbot For Generating Life Insurance Quotes Using Amazon Lex

Background

A few weeks back, I was asked to create a proof of concept web based chatbot for one of our clients. The web chatbot was to be used for generating life insurance quotes. The requirements were quite simple: ask a customer a few critical questions, use the responses to approximate their yearly premium and then display the premium on a webpage. Simple!

I don’t like reinventing the wheel and where possible, I leverage existing AWS services. For the task at hand, I decided to use Amazon Lex.

This blog provides the instructions for creating a web-based life insurance quote generating chatbot. It also highlights some of the challenges I faced while going from ideation to the finished product.

Let’s begin.

High Level Architecture

Below is a high-level overview of what I built.

  1. The customer will browse to the chatbot website.
  2. The customer will invoke the Amazon Lex Bot.
  3. The Amazon Lex Bot will ask a few questions and then pass the responses to an AWS Lambda function, to approximate the yearly premium for the customer.
  4. The response from the AWS Lambda function will be passed back to the Amazon Lex Bot.
  5. The Amazon Lex Bot will display the yearly premium estimate on the chatbot website.

Implementation

Let’s build the various components, as shown in the high-level overview.

AWS Lambda Function

When the Amazon Lex Bot invokes AWS Lambda, it calls the lambda_handler function and passes all the relevant parameters. The AWS Lambda function then uses the supplied information to calculate the yearly premium and returns the result.

I have pasted below the AWS Lambda Python 3.7 code that I used (getLifeInsuranceQuote). Do pay attention to the format of the return value from the AWS Lambda function. This is the format that Amazon Lex expects. To estimate the yearly premium, my AWS Lambda function called a machine learning model that had been pre-trained with life insurance data.

import json
# note: botocore.vendored.requests is removed in newer botocore versions;
# for current runtimes, bundle the requests library with your deployment package
from botocore.vendored import requests
from dateutil.relativedelta import relativedelta
from datetime import datetime

def lambda_handler(event, context):
    print("event:" + str(event))
    print("context:" + str(context))

    customer_state = event['currentIntent']['slots']['State']
    customer_firstname = event['currentIntent']['slots']['FirstName']
    customer_lastname = event['currentIntent']['slots']['LastName']
    customer_dob_str = event['currentIntent']['slots']['DOB']
    customer_coverlevel = event['currentIntent']['slots']['CoverLevel']
    customer_smoker = event['currentIntent']['slots']['Smoker']
    customer_gender = event['currentIntent']['slots']['Gender']

    print(customer_state)
    print(customer_firstname)
    print(customer_lastname)
    print(customer_dob_str)
    print(customer_coverlevel)
    print(customer_smoker)
    print(customer_gender)

    date_now = datetime.now()
    date_now_year = date_now.year
    customer_dob_year = int(customer_dob_str)
    customer_age = date_now_year - customer_dob_year
    print("Customer YOB:" + customer_dob_str)
    print("Customer age:" + str(customer_age))

    if customer_gender == "Female":
        sex = 0
    else:
        sex = 1

    if customer_smoker == "NO":
        smoker = 0
    else:
        smoker = 1

    url = "urlformlmodelapi"  # placeholder - replace with the URL of your ML model API
    data = {"age": customer_age, "state": customer_state, "sex": sex, "smoker": smoker}
    r = requests.post(url, json=data)
    premium = r.json()['claim_pred']
    print("premium: " + str(premium))

    message = customer_firstname + " from what you have told me, your monthly premiums will be approximately $" + str(round(premium/12))

    return {
        "sessionAttributes": {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": message
            }
        }
    }
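To make the handler's expectations concrete, here is a minimal, hypothetical Amazon Lex (version 1) fulfilment event of the shape the code above parses. The slot values are made up:

```python
from datetime import datetime

# Minimal, made-up Lex V1 fulfilment event; only the fields the handler reads
sample_event = {
    "currentIntent": {
        "slots": {
            "State": "NSW",
            "FirstName": "Jane",
            "LastName": "Citizen",
            "DOB": "1985",
            "CoverLevel": "500000",
            "Smoker": "NO",
            "Gender": "Female"
        }
    }
}

# Mirrors the handler's year-based age calculation
slots = sample_event["currentIntent"]["slots"]
age = datetime.now().year - int(slots["DOB"])
```

Amazon Lex sends the full event (including sessionAttributes and bot metadata) to the Lambda function; the handler only needs the currentIntent.slots subtree shown here.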

Amazon Lex Bot

In this section, I will take you through the steps to create the Amazon Lex bot.

  1. Sign into the AWS console and then browse to the Amazon Lex service page
  2. On the left-hand side of the screen, click on Bots and then from the right-hand side, click Create.
  3. In the next screen, click Custom bot.
  4. Give the bot a name (I called mine LifeInsuranceBot) and set Output voice to None. This is only a text-based application.
  5. Set the Session timeout to 5 minutes.
  6. Leave Sentiment Analysis set to No. Leave the IAM role set to the default settings. Set COPPA to No.
  7. Click Create. The Amazon Lex bot will now be created for you.
  8. In the next screen, click on the Editor tab from the top menu.

    Before we continue, let’s go over some terminology that is used by Amazon Lex.

    Intents – an intent, in its simplest form, encapsulates what you are trying to achieve. In our case, the intent is to generate a life insurance quote.
    Utterances – this describes the possible phrases that a user could use to invoke an intent. Amazon Lex uses utterances to identify which intent it should pass the request to. In our situation, a possible utterance would be “I would like to get a life insurance quote”.
    Slots – these can be thought of as variables. You would ask the user a question and the response will be held in a slot. Like variables, a slot must have a type. The slot type is used by Amazon Lex to convert the user’s response into the correct format. For example, if you ask the user for their date of birth, the slot that will capture their response must have a type of AMAZON.DATE. This ensures that the date of birth is stored as a date.

  9. From the left-hand side menu, click on the plus sign beside Intents and then click Create Intent. You will be asked for a unique name for your intent. In my case, I set the intent name to generateLifeInsuranceQuote.
  10. In the right-hand side, under Sample utterances enter phrases that will invoke this intent. I set this to I would like to get a life insurance quote.
  11. Amazon Lex comes with a lot of built-in slot types; however, if they don't match your use case, you can easily create custom slot types. For our intent, we will create three custom slot types.

    From the left-hand side, click on the plus beside Slot types and then create new slot types as per the following screenshots

  12. On to the Slots! As I mentioned previously, slots are used by Amazon Lex to get responses from the user. Each slot has a name, a type and a prompt. The prompt is what Amazon Lex will ask the user, and the response is stored in that particular slot. The prompts are asked in the order of priority assigned to them, from lowest to highest. I prefer to give some "character" to my prompts, to keep users engaged. ProTip: You can reference other slots in your prompt by enclosing the slot name within {}. For example, if you are capturing the user's name in the slot named FirstName, then you can prompt the user with Hello {FirstName}, how are you today? When Amazon Lex prompts the user, it will insert the user's name in place of {FirstName}. A touch of personalisation with minimal effort!
    Create slots as per the following screenshot. The slot type for FirstName is AMAZON.US_FIRST_NAME and the prompt is "Ok, I can help you with that. Let me get some details first. What is your first name?"

    The slot type for LastName is AMAZON.US_LAST_NAME

    The prompt for DOB is “Thanks {FirstName}. What year were you born in?”

    The prompt for CoverLevel is “Thank you for answering the questions {FirstName}. What amount do you want to take out the life insurance for?”

  13. The responses from the user will be passed to the AWS Lambda function, and the result will be provided back to the user.

    To enable this, in the right-hand side, under Fulfillment select AWS Lambda function and choose the AWS Lambda function that was created from the drop down beside Lambda function. Choose the appropriate version or alias.

  14. Click Save Intent.
  15. To get the Bot ready, click on Build from the top right-hand side of the screen.
  16. After the build is complete, you can test the Bot by using the Test Chatbot console from the top right-hand side of the screen.

Time to Panic!!!

Having successfully tested the Amazon Lex Bot, I was quite impressed with myself. But wait! I couldn’t find any way to “publish” it to a website! I didn’t want to showcase this Bot by using the “Test Chatbot” console! This is when I started panicking!

A Life Saver!

After searching frantically, I came across https://aws.amazon.com/blogs/machine-learning/greetings-visitor-engage-your-web-users-with-amazon-lex/. This article had exactly what I needed: a way to integrate my Amazon Lex bot with an HTML front end! Yay!

Amazon Cognito Identity Pool

    1. Go to the Amazon Cognito service page and then click on Manage Identity Pools. Then click on Create new identity pool.
    2. Provide a name for the Identity pool (I named mine LifeInsuranceBotPool) and tick the option Enable access to unauthenticated identities and click Create Pool.
    3. In the next screen, you will be asked to assign an IAM role. Click on View Details and note down the name of the roles that will be created. Then click Allow.
    4. The identity pool will be created and in the next screen, a sample code with the IdentityPoolId will be shown. Change the platform to Javascript and note down the IdentityPoolId and the region.
    5. Go to the AWS IAM service page and locate the two IAM roles that Amazon Cognito had created. Open each one of them and attach the following additional policies:
      • AmazonLexRunBotsOnly
      • AmazonPollyReadOnlyAccess

A front-end for our Amazon Lex Bot

The front end will be a static html page, served from an Amazon S3 bucket.

To make things easier, I have extracted the html code for the static website from https://aws.amazon.com/blogs/machine-learning/greetings-visitor-engage-your-web-users-with-amazon-lex/.

It is available at https://gist.github.com/nivleshc/bff75e30cc4f0133aab3abde8248814f

Save the above file as index.html and then carry out the following modifications.

Locate the following lines of code in index.html (lines 68 – 97)

// Initialize the Amazon Cognito credentials provider
AWS.config.region = 'us-east-1'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    // Provide your Pool Id here
    IdentityPoolId: 'us-east-1:XXXXX',
});

var lexruntime = new AWS.LexRuntime();
var lexUserId = 'chatbot-demo' + Date.now();
var sessionAttributes = {};

function pushChat() {
    // if there is text to be sent…
    var wisdomText = document.getElementById('wisdom');
    if (wisdomText && wisdomText.value && wisdomText.value.trim().length > 0) {
        // disable input to show we're sending it
        var wisdom = wisdomText.value.trim();
        wisdomText.value = '…';
        wisdomText.locked = true;

        // send it to the Lex runtime
        var params = {
            botAlias: '$LATEST',
            botName: 'BookTrip',
            inputText: wisdom,
            userId: lexUserId,
            sessionAttributes: sessionAttributes
        };

Change the values for AWS.config.region and AWS.config.credentials to what was displayed in the sample code when the Amazon Cognito Identity pool was created.

Replace chatbot-demo in the variable lexUserId to something more descriptive. This will be the user created within your Amazon Cognito Identity pool whenever the static website is accessed (for my deployment, I set this to lifeinsurancebot).

Change the botName to the name of your Amazon Lex Bot (in my case, my Amazon Lex Bot was called LifeInsuranceBot). If you have multiple versions of your Amazon Lex Bot and you are not using the latest version, then change the variable botAlias to the version you are using.

You might have noticed that index.html contains a lot of references to the demo chatbot that was created in https://aws.amazon.com/blogs/machine-learning/greetings-visitor-engage-your-web-users-with-amazon-lex/. I would suggest going through the html code and changing these references so that they refer to your own Amazon Lex Bot.

Next, we need an HTML page that will be returned when an error occurs with our static website. As good chefs do, I prepared one earlier. Download the contents of https://gist.github.com/nivleshc/853c7efc7979bdff6b5cc1a49074b9ce and save it as error.html.

Follow the steps below to create an Amazon S3 hosted static website

  1. Create an Amazon S3 bucket in a region closest to your Amazon Lex Bot (it is highly recommended to have the Amazon Lex Bot in an AWS region that is closest to your users)
  2. Upload the two files from above (index.html and error.html) to the Amazon S3 bucket. Change the permissions on these two files so that they are publicly accessible.
  3. Enable the Amazon S3 bucket for static website hosting. Note down the endpoint address shown in the Static website hosting section. This is the website’s address (URL).

That’s it folks! The Life Insurance Bot is now alive! To access it, open your favourite internet browser and go to the static website’s endpoint address.

The final product!

I must admit, a lot of work went into making this Amazon Lex Bot, however it is easily justified by the end result! One thing I would like to point out is that this prototype took no more than 2 days to build. The speed at which you can create proof-of-concepts in AWS gives you a great advantage over your competitors.

Below is a screenshot of what my LifeInsuranceBot looks like in action. If you followed through, yours would be similar.

I hope this blog was useful and comes in handy when you are trying to create your own web chatbots.

Till the next time, Enjoy!

Creating a Contact Center in minutes using Amazon Connect

Background

In my previous blog (https://nivleshc.wordpress.com/2019/10/09/managing-amazon-ec2-instances-using-amazon-ses/), I showed how we can manage Amazon EC2 instances using emails. However, what if you wanted to go further than that? What if, instead of sending an email, you instead wanted to dial in and check the status of or start/stop your Amazon EC2 instances?

In this blog, I will show how I used the above as a foundation to create my own Contact Center. I enriched the experience by including an additional option for the caller, to be transferred to a human agent. All this in minutes! Still skeptical? Follow on and I will show you how I did all of this using Amazon Connect.

High Level Solution Design

Below is the high-level solution design for the Contact Center I built.

The steps (as denoted by the numbers in the diagram above) are explained below

  1. The caller dials the Direct Inward Dial (DID) number associated with the Amazon Connect instance
  2. Amazon Connect answers the call
  3. Amazon Connect invokes the AWS Lambda function to authenticate the caller.
  4. The AWS Lambda function authenticates the caller by checking their callerID against the entries stored in the authorisedCallers DynamoDB table. If there is a match, the first name and last name stored against the callerID is returned to Amazon Connect. Otherwise, an “unauthorised user” message is returned to Amazon Connect.
  5. If the caller is unauthorised, Amazon Connect informs them of this and hangs up the call.
  6. If the caller is authorised, Amazon Connect uses the first name and last name provided by AWS Lambda function to personalise a welcome message for them. Amazon Connect then provides the caller with two options:
      • (6a) press 1 to get the status of the Amazon EC2 instances. If this is pressed, Amazon Connect invokes an AWS Lambda function to get the status of the Amazon EC2 instances and plays the results to the caller
      • (6b) press 2 to talk to an agent. If this is pressed, Amazon Connect places the call in a queue, where it will be answered by the next available agent

Preparation

My solution requires the following components

  • Amazon DynamoDB table to store authorised callers (an item in this table will have the format phonenumber, firstname, lastname)
  • AWS Lambda function to authenticate callers
  • AWS Lambda function to get the status of all Amazon EC2 instances in the region

I created the following AWS CloudFormation template to provision the above resources.

AWSTemplateFormatVersion: "2010-09-09"
Description: Template for deploying Amazon DynamoDB and AWS Lambda functions that will be used by the Amazon Connect instance
Parameters:
  authorisedUsersTablename:
    Description: Name of the Amazon DynamoDB Table that will be created to store phone numbers for approved callers to Amazon Connect
    Type: String
    Default: amzn-connect-authorisedUsers
  DynamoDBBillingMode:
    Description: Billing mode to be used for authorisedUsers Amazon DynamoDB Table
    Type: String
    AllowedValues: [PAY_PER_REQUEST]
Resources:
  authoriseCallerLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: logsStreamAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
        - PolicyName: DynamoDBAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:Query
                Resource: !GetAtt authorisedUsersTable.Arn
  getInstanceStatusLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: logsStreamAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
        - PolicyName: EC2DescribeAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - "ec2:Describe*"
                Resource: "*"
  authoriseCallerFunctionPolicy:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt
        - authoriseCaller
        - Arn
      Principal: connect.amazonaws.com
  getInstanceStatusFunctionPolicy:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt
        - getInstanceStatus
        - Arn
      Principal: connect.amazonaws.com
  authorisedUsersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Ref authorisedUsersTablename
      AttributeDefinitions:
        - AttributeName: phoneNumber
          AttributeType: S
      KeySchema:
        - AttributeName: phoneNumber
          KeyType: HASH
      BillingMode: !Ref DynamoDBBillingMode
  authoriseCaller:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "amzn-connect-authoriseCaller"
      Description: "This function checks if the caller is authorised to use the Amazon Connect Service"
      Handler: index.lambda_handler
      Runtime: python3.6
      Role: !GetAtt 'authoriseCallerLambdaExecutionRole.Arn'
      Environment:
        Variables:
          AUTHORISEDUSERSTABLE: !Ref authorisedUsersTable
      Code:
        ZipFile: |
          import boto3
          import os
          from boto3.dynamodb.conditions import Key, Attr

          def lambda_handler(event, context):
              print("event:", event)
              print("context:", context)
              authorisedUsersTable = os.environ['AUTHORISEDUSERSTABLE']
              callerID = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
              # Establish connection to DynamoDB and retrieve table
              dynamodb = boto3.resource('dynamodb')
              table = dynamodb.Table(authorisedUsersTable)
              response = table.query(KeyConditionExpression=Key('phoneNumber').eq(callerID))
              if len(response['Items']) > 0:
                  firstName = response['Items'][0]['firstName']
                  lastName = response['Items'][0]['lastName']
              else:
                  firstName = 'unauthorised'
                  lastName = 'unauthorised'
              callerDetails = {
                  'phoneNumber': callerID,
                  'firstName': firstName,
                  'lastName': lastName
              }
              print("CallerDetails:", str(callerDetails))
              return callerDetails
  getInstanceStatus:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "amzn-connect-getInstanceStatus"
      Description: "This function checks and reports the number of EC2 instances that are running and stopped in the AWS region where this AWS Lambda function is running"
      Handler: index.lambda_handler
      Runtime: python3.6
      Role: !GetAtt 'getInstanceStatusLambdaExecutionRole.Arn'
      Code:
        ZipFile: |
          import boto3

          def lambda_handler(event, context):
              print("event:", event)
              print("context", context)
              ec2 = boto3.client("ec2")
              ec2_status_running = ec2.describe_instances(
                  Filters=[
                      {
                          'Name': 'instance-state-name',
                          'Values': ['running']
                      }
                  ]
              )
              ec2_status_stopped = ec2.describe_instances(
                  Filters=[
                      {
                          'Name': 'instance-state-name',
                          'Values': ['stopped']
                      }
                  ]
              )
              num_ec2_running = len(ec2_status_running['Reservations'])
              num_ec2_stopped = len(ec2_status_stopped['Reservations'])
              result = {
                  'numberEC2Running': num_ec2_running,
                  'numberEC2Stopped': num_ec2_stopped
              }
              print("Number of EC2 instances running:", num_ec2_running)
              print("Number of EC2 instances stopped:", num_ec2_stopped)
              return result

The above AWS CloudFormation template can be downloaded from https://gist.github.com/nivleshc/926259dbbab22dd4890e0708cf488983
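Once the stack is deployed, the authorisedUsers table needs at least one entry before any caller can authenticate. Below is a hedged sketch of seeding it with boto3: the table name matches the template's default, the phone number is made up (it must be in the E.164 format that Amazon Connect presents as the callerID), and the put_item call is commented out so the snippet runs without AWS credentials:

```python
# Item layout expected by the amzn-connect-authoriseCaller function:
# phoneNumber is the partition key, in E.164 format (e.g. +61...), as
# presented by Amazon Connect in CustomerEndpoint.Address
item = {
    "phoneNumber": "+61400000000",  # made-up number - replace with a real caller's
    "firstName": "Jane",
    "lastName": "Citizen"
}

# Uncomment to write the item (requires AWS credentials and the deployed stack):
# import boto3
# table = boto3.resource("dynamodb").Table("amzn-connect-authorisedUsers")
# table.put_item(Item=item)
```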

Implementation

Currently, AWS CloudFormation does not support Amazon Connect, so the implementation must be done manually.

Drawing on my own experience with setting up Amazon Connect solutions, I observed that there are roughly three stages required to get a Contact Center up and running. These are:

  • Provisioning an Amazon Connect instance – this is straightforward and is essentially where an Amazon Connect instance is provisioned and made ready for your use
  • Configuring the Amazon Connect instance – this covers all the tasks to customise the Amazon Connect instance. It includes the configuration of the Direct Inward Dial (DID), hours of operation for the Contact Center, routing profiles, users etc
  • Creating a custom Contact flow – a Contact flow defines the customer experience of your Contact Center, from start to finish. Amazon Connect provides a few default Contact flows, however it is highly recommended that you create one that aligns with your own business requirements.

Follow along and I will show you how to go about setting up each of the above mentioned stages.

Provision the Amazon Connect Instance

  1. From the AWS Console, open the Amazon Connect service. Select the Sydney region (or a region of your choice, however do keep in mind that at the moment, Amazon Connect is only available in a few regions)
  2. Enter an Access URL for your Amazon Connect Instance. This URL will be used to access the Amazon Connect instance once it has been provisioned.
  3. In the next screen, create an administrator account for this Amazon Connect instance
  4. The next prompt is for Telephony options. For my solution, I selected the following:
    1. Incoming calls: I want to handle incoming calls with Amazon Connect
    2. Outgoing calls: I want to make outbound calls with Amazon Connect
  5. In the next screen, Data Storage options are displayed. For my solution, I left everything as default.
  6. In the next screen, review the configuration and then click Create instance

Configure the Amazon Connect Instance

After the Amazon Connect instance has been successfully provisioned, use the following steps to configure it:

  1. Claim a phone number for your Amazon Connect Instance. This is the number that users will be calling to interact with your Amazon Connect instance (for claiming non toll free local numbers, you must open a support case with AWS, to prove that you have a local business in the country where you are trying to claim the phone number. Claiming a local toll-free number is easier however it is more expensive)
  2. Create some Hours of operation profiles. These will be used when creating a queue
  3. Create a queue. Each queue can have different hours of operation assigned
  4. Create a routing profile. A user is associated with a routing profile, which defines their inbound and outbound queues.
  5. Create users. Once created, assign the users to a predefined security profile (administrator, agent etc) and also assign them to a specific routing profile

Create a custom Contact flow

A Contact flow defines the customer experience of your Contact Center, from start to finish. By default, Amazon Connect provides a few Contact flows that you can use. However, it is highly recommended that you create one that suits your own business requirements.

To create a new Contact flow, follow these steps:

  • Login to your Amazon Connect instance using the Access URL and administrator account (you can also access your Amazon Connect instance using the AWS Console and then click on Login as administrator)
  • Once logged in, from the left-hand side menu, go to Routing and then click on Contact flows
  • In the next screen, click on Create contact flow
  • Use the visual editor to create your Contact flow

Once the Contact flow has been created, attach it to your Direct Inward Dial (DID) phone number by using the following steps:

  • from the left-hand side menu, click on Routing and then Phone numbers.
  • Click on the respective phone number and change its Contact flow / IVR to the Contact flow you want to attach to this phone number.

Below is a screenshot of the Contact flow I created for my solution. It shows the flow logic I used and you can easily replicate it for your own environment. The red rectangles show where the AWS Lambda functions (mentioned in the pre-requisites above) are used.

This is pretty much all that is required to get your Contact Center up and running. It took me approximately thirty minutes from start to finish (this does not include the time required to provision the Amazon DynamoDB tables and AWS Lambda functions). However, I would recommend spending time on your Contact flows, as this is the brains of the operation. This must be done in conjunction with someone who understands the business really well and knows the outcomes that must be achieved by the Contact Center solution. There is a lot that can be done here, and the more time you invest in your Contact flow, the better the outcomes you will get.

The above is just a small part of what Amazon Connect is capable of. For its full set of capabilities, refer to https://aws.amazon.com/connect/

So, if you have been dreaming of building your own Contact Center but were worried about the cost or effort required, wait no more! You can now easily create one in minutes using Amazon Connect, pay for only what you use, and tear it down if you don’t need it anymore. However, before you start, I would strongly recommend that you familiarise yourself with the Amazon Connect pricing model. For example, you get charged a daily rate for any claimed phone numbers attached to your Amazon Connect instance (this is similar to a phone line rental charge). Full pricing is available at https://aws.amazon.com/connect/pricing/.

I hope the above has given you some insights into Amazon Connect. Till the next time, Enjoy!

Building a Breakfast Ordering Skill for Amazon Alexa – Part 1

Introduction

At the AWS Summit Sydney this year, Telstra decided to host a breakfast session for some of their VIP clients. This was more of a networking session, to get to know the clients much better. However, instead of having a “normal” breakfast session, we decided to take it up one level 😉

Breakfast ordering is quite “boring” if you ask me 😉 The waitress comes to the table, gives you a menu and asks what you would like to order. She then takes the order and after some time your meal is with you.

As it was AWS Summit, we decided to sprinkle a bit of technical fairy dust on the ordering process. Instead of having the waitress take the breakfast orders, we contemplated the idea of using Amazon Alexa instead 😉

I decided to give the Alexa skill development a go. However, not having any prior Alexa skill development experience, I anticipated an uphill battle, having to first learn the product and then developing for it. To my amazement, the learning curve wasn’t too steep and over a weekend, spending just 12 hours in total, I had a working proof of concept breakfast ordering skill ready!

Here is a link to the proof of concept skill https://youtu.be/Z5Prr31ya10

I then spent a week polishing the Alexa skill, giving it more “personality” and adding a more “human” experience.

All the work paid off when I got told that my Alexa skill would be used at the Telstra breakfast session! I was over the moon!

For the final product, to make things even more interesting, I created a business intelligence chart using Amazon QuickSight, showing the popularity of each of the food and drink items on the menu. The popularity was based on the orders that were being received.


Using a laptop, I displayed the chart near the Amazon Echo Dot. This was to help people choose what food or drink they wanted to order (a neat marketing trick 😉). If you would like to know more about Amazon QuickSight, you can read about it at Amazon QuickSight – An elegant and easy to use business analytics tool

Just as a teaser, you can watch one of the ordering scenarios for the finished breakfast ordering skill at https://youtu.be/T5PU9Q8g8ys

In this blog, I will introduce the architecture behind Amazon Alexa and prepare you for creating an Amazon Alexa Skill. In the next blog, we will get our hands dirty with creating the breakfast ordering Alexa skill.

How does Amazon Alexa actually work?

I have heard a lot of people use the name “Alexa” interchangeably with the Amazon Echo devices. As good as that is for Amazon’s marketing team, unfortunately, I have to set the record straight. Amazon Echo devices are the physical hardware that Amazon sells, which interface with the Alexa Cloud. You can see the whole range at https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b?ie=UTF8&node=9818047011. These devices don’t have any smarts in them. They sit in the background listening for the “wake” word, and then they start streaming the audio to the Alexa Cloud.

The Alexa Cloud is where all the smarts are located. Using speech recognition, machine learning and natural language processing, the Alexa Cloud converts the audio to text. It identifies the skill name the user requested, the intent and any slot values it finds (these will be explained further in the next blog). The intent and slot values (if any) are passed to the identified skill. The skill processes this input using some form of compute (AWS Lambda in my case) and passes the output back to the Alexa Cloud. The Alexa Cloud converts the skill output to Speech Synthesis Markup Language (SSML) and sends it to the Amazon Echo device. The device then converts the SSML to audio and plays it to the user.

Below is an overview of the process.


Diagram is from https://developer.amazon.com/blogs/alexa/post/1c9f0651-6f67-415d-baa2-542ebc0a84cc/build-engaging-skills-what-s-inside-the-alexa-json-request
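To make the flow concrete, here is a minimal, hand-rolled sketch in Python of the kind of JSON envelope a skill's backend hands back to the Alexa Cloud. The function name is my own, and the real response schema supports much more (cards, reprompts, session attributes), all of which I have omitted here:

```python
import json

def build_alexa_response(ssml_text):
    """Build a bare-bones Alexa skill response whose outputSpeech carries
    SSML; the Alexa Cloud sends this on to the Echo device, which
    converts it to audio."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": f"<speak>{ssml_text}</speak>",
            },
            "shouldEndSession": True,
        },
    }

# What the breakfast skill might return after taking an order:
print(json.dumps(build_alexa_response("Your pancakes are on their way!"), indent=2))
```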

Getting things ready

Getting an Alexa enabled device

The first thing to get is an Alexa enabled device. Amazon has released quite a few different varieties of Alexa enabled devices. You can check out the whole family here.

If you are keen to try a side project, you can build your own Alexa device using a Raspberry Pi. A good guide can be found at https://www.lifehacker.com.au/2016/10/how-to-build-your-own-amazon-echo-with-a-raspberry-pi/

You can also try out EchoSim (Amazon Echo Simulator). This is a browser-based interface to Amazon Alexa. Please ensure you read the limits of EchoSim on their website; for instance, it cannot stream music.

For developing the breakfast ordering skill, I decided to purchase an Amazon Echo Dot. It’s a nice compact device which doesn’t cost much and can run off any USB power source. For the Telstra breakfast session, I actually ran it off my portable battery pack 😉

Create an Amazon Account

Now that you have got yourself an Alexa enabled device, you will need an Amazon account to register it with. You can use one that you already have or create a new one. If you don’t have an Amazon account, you can either create one beforehand by going to https://www.amazon.com or you can create it straight from the Alexa app (the Alexa app is used to register the Amazon Echo device).

Setup your Amazon Echo Device

Use the Alexa app to setup your Amazon Echo device. When you login to the app, you will be asked for the Amazon Account credentials. As stated above, if you don’t have an Amazon account, you can create it from within the app.

Create an Alexa Developer Account

To create skills for Alexa, you need a developer account. If you don’t have one already, you can create one by going to https://developer.amazon.com/alexa. There are no costs associated with creating an Alexa developer account.

Just make sure that the username you choose for your Alexa developer account matches the username of the Amazon account to which your Amazon Echo is registered. This will enable you to test your Alexa skills on your Amazon Echo device without having to publish them on the Alexa Skills Store (the skills will show under Your Skills in the Alexa app).

Create an AWS Free Tier Account

In order to process any of the requests sent to the breakfast ordering Alexa skill, we will make use of AWS Lambda. AWS Lambda provides a cost-effective way to run code because you are only charged for the time the code runs; there is no charge for idle time.

If you already have an AWS account, you can use that; otherwise, you can sign up for an AWS Free Tier account by going to https://aws.amazon.com. AWS provides a lot of services for free for the first 12 months under the Free Tier, and some services continue to have a free allowance beyond the 12 months (AWS Lambda is one such service). For a full list of Free Tier services, visit https://aws.amazon.com/free/

High Level Architecture for the Breakfast Ordering Skill

Below is the architectural overview for the Breakfast Ordering Skill that I built. I will introduce you to the various components over the next few blogs.

In the next blog, I will take you through the Alexa Developer console, where we will use the Alexa Skills Kit (ASK) to start creating our breakfast ordering skill. We will define the invocation name, intents and slot names for our Alexa skill. Not familiar with these terms? Don’t worry, I will explain them in the next blog. I hope to see you there.

See you soon.

 

Using AWS EC2 Instances to train a Convolutional Neural Network to identify Cows and Horses

Background

Machine Learning (ML) and Artificial Intelligence (AI) have been a hobby of mine for years now. After playing with them approximately 8 years back, I let it lapse till early this year, and boy oh boy, how things have matured! There are products in the market these days that use some form of ML – some examples are Apple’s Siri, Google Assistant and Amazon Alexa.

Computational power has increased to the point where calculations that took months can now be done within days. However, the biggest change has come about due to the vast amounts of data that models can now be trained on. More data means better accuracy in models.

If you have taken any programming course, you would remember the hello world program. This is a foundational program which introduces you to the language and gives you the confidence to continue on. The hello world of ML is identifying cats and dogs. In almost every online course I have taken, this is the first project that you build.

For anyone wanting a background in Machine Learning, I would highly recommend Andrew Ng’s https://www.coursera.org/learn/machine-learning on Coursera. However, be warned, it has a lot of maths 🙂 If you are able to get through it, you will get a very good foundation in ML.

If theory is not your cup of tea, another way to approach ML is to just implement it and learn as you go. You don’t need a PhD in ML to start implementing it. This is the philosophy behind Jeremy Howard’s and Rachel Thomas’s http://www.fast.ai. They take you through the implementation steps and introduce the theory on a need-to-know basis; in essence, it is a top-down approach.

I am still a few lessons away from finishing the fast.ai course; however, I have learnt so much already and I cannot recommend it enough.

In this blog, I will take you through the steps to implement a Convolutional Neural Network (CNN) that will be able to pick out horses from cows. CNNs are quite complicated in nature so we won’t go into the nitty-gritty details of creating one from scratch. Instead, we will use the foundational libraries from fast.ai’s lesson 1 and modify it a bit, so that instead of identifying cats and dogs, we will use it to identify cows and horses.

In the process, I will introduce you to a tool that will help you scrape Google for your own image dataset.

Most important of all, I will show you how the amount of data used to train your CNN model affects its accuracy.

So, put your seatbelt on and let’s get started!

 

1. Setting up the AWS EC2 Instance

ML requires a lot of processing power. To get really good throughput, it is recommended to use GPUs instead of CPUs. If you were to build a kit to try this at home, it could easily cost you a few thousand dollars, not to mention the bill for the cooling and electricity usage.

However, with Cloud Computing, we don’t need to go out and buy the whole kit; instead, we can just rent it for as long as we want. This provides a much more affordable way to learn ML.

In this blog, we will be using AWS EC2 instances. For the cheapest GPU option, we will use a p2.xlarge instance. Be warned, these cost $0.90/hr, so I would suggest turning them off after use, otherwise you will surely rack up a huge bill.

Reshma has done a fantastic job of putting together the instructions on setting up an AWS Instance for running fast.ai course lessons. I will be using her instructions, with a few modifications. Reshma’s instructions can be found here.

OK, let’s begin.

  • Login to your AWS Console
  • Go to the EC2 section
  • On the top left menu, you will see EC2 Dashboard. Click on Limits under it
  • Now, on the right you will see all the types of EC2 instances you are allowed to run. Search for p2.xlarge instances. These have a current limit of zero, meaning you cannot launch them. Click on Request limit increase and then fill out the form to justify why you want a p2.xlarge instance. Once done, click on Submit. In my case, within a few minutes, I received an email saying that my limit increase had been approved.
  • Click on EC2 Dashboard from the left menu
  • Click on Launch Instance
  • In the next screen, in the left hand side menu, click on Community AMIs
  • On the right side of the screen, search for fast.ai
  • From the results, select fastai-part1v2-p2
  • In the next screen (Instance Type) filter by GPU compute and choose p2.xlarge
  • In the next screen configure the instance details. Ensure you get a public IP address (Auto-assign Public IP) because you will be connecting to this instance over the internet. Once done, click Next: Add Storage
  • In the next screen, you don’t need to do anything. Just be aware that the community AMI comes with an 80 GB hard disk (at $0.10/GB/month, this will amount to $8/month). Click Next
  • In the next screen, add any tags for the EC2 Instance. To give the instance a name, you can set the Key to Name and the Value to fastai. Click Next
  • For security groups, all you need to do is allow SSH to the instance. You can leave the source as 0.0.0.0/0 (this allows connections to the EC2 instance from any public IP address). However, if you want to be extra secure, you can set the source to your current IP address. Be aware that should your public IP address change (hardly any ISPs give you a static IP address unless you pay extra), you will have to go back into the AWS Console and update the source in the security group. Click Next
  • In the next section, check that all details are correct and then click on Launch. You will be asked for your key pair. You can either choose an existing key pair or create a new one. Ensure you keep the key pair in a safe place because whoever possesses it can connect to your EC2 instance.
  • Now, sit back and relax. Within a few minutes, your EC2 instance will be ready. You can monitor the progress in the EC2 Dashboard

DON’T FORGET TO SHUTDOWN THE INSTANCE WHEN NOT USING IT. AT $0.90/hr, IT MIGHT NOT SEEM MUCH, HOWEVER THE COST CAN EASILY ACCUMULATE TO SOMETHING QUITE EXPENSIVE

2. Creating the dataset

To train our Convolutional Neural Network (CNN), we need to get lots of images of cows and horses. This got me thinking. Why not get them off Google? But then this presented another challenge. How do I download all the images? Surely I don’t want to be sitting there right-clicking each search result and saving it!

After some googling, I landed on https://github.com/hardikvasa/google-images-download. It does exactly what I wanted: it performs a Google image search using a keyword and downloads the results.

Install it using the instructions provided in the link above. By default, it only downloads 100 images. As CNNs need lots more, I would suggest installing chromedriver. The instructions for this are in the Troubleshooting section under ## Installing the chromedriver (with Selenium)

To download 1000 images each of cows and horses, use the following commands (for some reason the tool only downloads around 800 images)

  • the downloaded images will be stored in the subfolder cows/downloaded and horses/downloaded in the /Users/x/Documents/images folder.
  • keyword denotes what we are searching for in google. For cows, we will use cow because we want a single cow’s photo. The same for horses.
  • --chromedriver provides the path to where the chromedriver has been stored
  • the images will be in jpg format
googleimagesdownload --keywords "cow" --format jpg --output_directory "/Users/x/Documents/images/" --image_directory "cows/downloaded" --limit 1000 --chromedriver /Users/x/Documents/tools/chromedriver
googleimagesdownload --keywords "horse" --format jpg --output_directory "/Users/x/Documents/images/" --image_directory "horses/downloaded" --limit 1000 --chromedriver /Users/x/Documents/tools/chromedriver

3. Finding and Removing Corrupt Images

One disadvantage of using the googleimagesdownload script is that, at times, a downloaded image cannot be opened. This will cause issues when our CNN tries to use it for training/validation. To ensure our CNN does not encounter any issues, we will do some housekeeping beforehand and remove all corrupt images (images that cannot be opened).

I wrote the following Python script to find and move the corrupt images to a separate folder. The script uses the matplotlib library (the same library used by the fast.ai CNN framework). If you don’t have it, you can install it by following https://matplotlib.org/users/installing.html.

The script assumes that within the root folder, there is a subfolder called downloaded which contains all the images. It also assumes there is a subfolder called corrupt within the root folder. This is where the corrupt images will be moved to. Set the root_folder_path to the parent folder of the folder where the images are stored.

#this script will go through the downloaded images and find those that cannot be opened. These will be moved to the corrupt folder.

#load libraries
import matplotlib.pyplot as plt
import os

#image folders
root_folder_path = '/Users/x/Documents/images/cows/'
image_folder_path = root_folder_path + 'downloaded/'
corrupt_folder_path = root_folder_path + 'corrupt' #folder where the corrupt images will be moved to

#get a list of all files in the image folder
image_files = os.listdir(image_folder_path)

print(f'Total Image Files Found: {len(image_files)}')
num_image_moved = 0

#go through each image file and see if we can read it
for imageFile in image_files:
    filePath = image_folder_path + imageFile
    #print(f'Reading {filePath}')
    try:
        valid_img = plt.imread(filePath)
    except:
        print(f'Error reading {filePath}. File will be moved to corrupt folder')
        os.rename(filePath, os.path.join(corrupt_folder_path, imageFile))
        num_image_moved += 1

print(f'Moved {num_image_moved} images to corrupt folder')

For some unknown reason, the script, at times, moves good images into the corrupt folder as well. I would suggest that you go through the corrupt images and see if you can open them (there won’t be many in the corrupt folder). If you can, just manually move them back into the downloaded folder.

To make the images easier to handle, let’s rename them using the following format.

  • For the images in the cows/downloaded folder rename them to a format CowXXX.jpg where XXX is a number starting from 1
  • For the images in the horses/downloaded folder rename them to a format HorseXXX.jpg where XXX is a number starting from 1
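Rather than renaming hundreds of files by hand, the renaming can be scripted. Below is a small sketch in Python; the function name and paths are my own, so adjust them to your setup. It assumes the existing file names don't already match the target pattern (otherwise a rename could collide with a file that hasn't been processed yet):

```python
import os

def rename_images(folder, prefix):
    """Rename every .jpg in the folder to prefix1.jpg, prefix2.jpg, ...
    Returns the number of files renamed."""
    files = sorted(f for f in os.listdir(folder) if f.lower().endswith('.jpg'))
    for i, name in enumerate(files, start=1):
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, f'{prefix}{i}.jpg'))
    return len(files)

# hypothetical usage - point these at your own image folders:
# rename_images('/Users/x/Documents/images/cows/downloaded', 'Cow')
# rename_images('/Users/x/Documents/images/horses/downloaded', 'Horse')
```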

 

4. Transferring the images to the AWS EC2 Instance

In the following sections, I am using ssh and scp, which come built in with macOS. For Windows, you can use PuTTY for ssh and WinSCP for scp.

A CNN (or any other neural network model) is trained using a set of images. Once training has finished, to find out how accurate the model is, we give it a set of validation images (these are different from those it was trained on, but we know what they are images of) and ask it to identify them. We then compare the results with the actual images to find the accuracy.

 

In this blog, we will first train our CNN on a small set of images.

Do the following

  • create a subfolder inside the cows folder and name it train
  • create a subfolder inside the cows folder and name it valid
  • move 100 images from the cows/downloaded folder into the cows/train folder
  • move 20 images from the cows/downloaded folder into the cows/valid folder

Make sure the images in the cows/train folder are not the same as those in cows/valid folder

Do the same for the horses images, so basically

  • create a subfolder inside the horses folder and name it train
  • create a subfolder inside the horses folder and name it valid
  • move 100 images from the horses/downloaded folder into the horses/train folder
  • move 20 images from the horses/downloaded folder into the horses/valid folder
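If moving images around by hand gets tedious, the train/valid split above can be scripted too. Here is a sketch in Python; the function name and default counts mirror the steps above but are my own invention, and the paths are hypothetical:

```python
import os
import random
import shutil

def split_dataset(root, n_train=100, n_valid=20, seed=42):
    """Randomly move images from root/downloaded into root/train and
    root/valid. The fixed seed makes the split repeatable, and train
    and valid never overlap because each file is moved at most once."""
    src = os.path.join(root, 'downloaded')
    files = sorted(f for f in os.listdir(src) if f.lower().endswith('.jpg'))
    random.Random(seed).shuffle(files)
    splits = {'train': files[:n_train], 'valid': files[n_train:n_train + n_valid]}
    for subfolder, chunk in splits.items():
        dest = os.path.join(root, subfolder)
        os.makedirs(dest, exist_ok=True)
        for f in chunk:
            shutil.move(os.path.join(src, f), os.path.join(dest, f))

# hypothetical usage - repeat for the horses folder:
# split_dataset('/Users/x/Documents/images/cows')
```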

Now connect to the AWS EC2 instance using the following command

ssh -i key.pem ubuntu@public-ip

where

  • key.pem is the key pair that was used to create the AWS EC2 instance (if the key pair is not in the current folder then provide the full path to it)
  • public-ip is the public ip address for your AWS EC2 instance (this can be obtained from the EC2 Dashboard)

Once connected, use the following commands to create the required folders

cd data
mkdir cowshorses
mkdir cowshorses/train
mkdir cowshorses/valid
mkdir cowshorses/train/cows
mkdir cowshorses/train/horses
mkdir cowshorses/valid/cows
mkdir cowshorses/valid/horses

Close your ssh session by typing exit

Run the following commands to transfer the images from your local computer to the AWS EC2 instance

To transfer the cows training set
scp -i key.pem /Users/x/Documents/images/cows/train/*  ubuntu@public-ip:~/data/cowshorses/train/cows

To transfer the horses training set
scp -i key.pem /Users/x/Documents/images/horses/train/*  ubuntu@public-ip:~/data/cowshorses/train/horses

To transfer the cows validation set
scp -i key.pem /Users/x/Documents/images/cows/valid/*  ubuntu@public-ip:~/data/cowshorses/valid/cows

To transfer the horses validation set
scp -i key.pem /Users/x/Documents/images/horses/valid/*  ubuntu@public-ip:~/data/cowshorses/valid/horses

5. Starting the Jupyter Notebook

Jupyter Notebooks are one of the most popular tools used by ML practitioners and data scientists. For those who aren’t familiar with them, in a nutshell, a notebook is a web page that contains descriptions and interactive code. The user can run the code live from within the document. This is possible because Jupyter Notebooks execute the code on the server they are running on and then display the result in the web page. For more information, you can check out http://jupyter.org

In our case, we will be running the Jupyter Notebook on the AWS EC2 instance. However, we will be accessing it through our local computer. For security reasons, we will not publish our Jupyter Notebook to the whole wide world (lol that does spell www).

Instead, we will use the following ssh command to bind our local computer’s tcp port 8888 to the AWS EC2 instance’s tcp port 8888 (this is the port on which the Jupyter Notebook will be running) when we connect to it. This will allow us to access the Jupyter Notebook as if it is running locally on our computer, however the connection will be tunnelled to the AWS EC2 instance.

ssh -i key.pem ubuntu@public-ip -L8888:localhost:8888

Next, run the following commands to start an instance of Jupyter Notebook

cd fastai
jupyter notebook

After the Jupyter Notebook starts, it will provide a URL to access it, along with the token to authenticate with. Copy it and then paste it into a browser on your local computer.

You will now be able to access the fastai Jupyter Notebook.

Follow the steps below to open Lesson 1.

  • click on the courses folder
  • once inside the courses folder, click on the dl1 folder

In the next screen, find the file lesson1.ipynb and double-click it. This will launch the lesson1 Jupyter Notebook in another tab.

Give yourself a big round of applause for making it this far!

Now, start from the top of lesson1 and go through the first three code sections and execute them. To execute the code, put the mouse pointer in the code section and then press Shift+Enter.

In the next section, change the path to where we moved the cows and horses pictures to. It should look like below

PATH = "data/cowshorses/"

Then, execute this code section.

Skip the following sections

  • Extra steps if NOT using Crestle or Paperspace or our scripts
  • Extra steps if using Crestle

Just a word of caution. The original Jupyter Notebook is meant to distinguish between cats and dogs. However, since we are using it to distinguish between cows and horses, whenever you see a mention of cats, change it to cows and whenever you see a mention of dogs, change it to horses.

The following lines don’t need any changing, so just execute them as they are

os.listdir(PATH)
os.listdir(f'{PATH}valid')

In the next line, replace cats with cows so that you end up with the following

files = !ls {PATH}valid/cows | head
files

Execute the above code. A list of the first 10 cow image files will be displayed.

Next, let’s see what the first cow image looks like.

In the next line, change cats to cows to get the following.

img = plt.imread(f'{PATH}valid/cows/{files[0]}')
plt.imshow(img);

Execute the code and you will see the cow image displayed.

Execute the next two code sections. Leave the section after that commented out.

Now, instead of creating a CNN model from scratch, we will use one that was pre-trained on ImageNet which had 1.2 million images and 1000 classes. So it already knows quite a lot about how to distinguish objects. To make it suitable to what we want to do, we will now train it further on our images of cows and horses.

The following defines which model to use and provides the data to train on (the CNN model that we will be using is called resnet34). Execute the below code section.

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)

And now for the best part! Let’s train the model and give it a learning rate of 0.01.

learn.fit(0.01, 1)

After you execute the above code, the model will be trained on the cows and horses images that were provided in the train folders. The model will then be tested for accuracy by getting it to identify the images contained in the valid folders. Since we already know what the images are of, we can use this to calculate the model’s accuracy.
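Under the hood, the accuracy figure is just the fraction of validation images the model got right. A minimal sketch of that calculation (the function name is my own, not part of fast.ai):

```python
def accuracy(predictions, labels):
    """Fraction of validation images the model labelled correctly."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# e.g. the model gets 3 out of 4 validation images right:
print(accuracy(['cow', 'horse', 'cow', 'cow'],
               ['cow', 'horse', 'horse', 'cow']))  # 0.75
```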

When I ran the above code, I got an accuracy of 0.75. This is quite good, since it means the model can tell cows from horses 75% of the time. Not to forget, we used only 100 cow and 100 horse images to train it, and it didn’t even take that long to train!

Now, let’s see what happens when we give it loads more images to train on.

BTW, to get more insights into the results from the trained model, you can go through all the sections between the lines learn.fit(0.01, 1) and Choosing a learning rate.

Another take at training the model

From all the literature I have been reading, one point keeps repeating: more data means better models. Let’s put this to the test.

This time around we will give the model ALL the images we downloaded.

Do the following.

  • on your local computer, move the photos back to the downloaded folder
    • move photos from cows/train to cows/downloaded
    • move photos from cows/valid to cows/downloaded
    • move photos from horses/train to horses/downloaded
    • move photos from horses/valid to horses/downloaded
  • on your local computer, move 100 photos of cows to cows/valid folder and the rest to the cows/train folder
    • move 100 photos from cows/downloaded to cows/valid folder
    • move the rest of the photos from cows/downloaded to cows/train folder
  • on your local computer, move 100 photos for horses to horses/valid and the rest to horses/train folder
    • move 100 photos from horses/downloaded to horses/valid folder
    • move the rest of the photos from horses/downloaded to horses/train folder
  • on the AWS EC2 instance, delete all the photos under the following folders
    • ~/data/cowshorses/train/cows
    • ~/data/cowshorses/train/horses
    • ~/data/cowshorses/valid/cows
    • ~/data/cowshorses/valid/horses

Use the following commands to copy the images from the local computer to the AWS EC2 Instance

To transfer the cows training set
scp -i key.pem /Users/x/Documents/images/cows/train/*  ubuntu@public-ip:~/data/cowshorses/train/cows

To transfer the horses training set
scp -i key.pem /Users/x/Documents/images/horses/train/*  ubuntu@public-ip:~/data/cowshorses/train/horses

To transfer the cows validation set
scp -i key.pem /Users/x/Documents/images/cows/valid/*  ubuntu@public-ip:~/data/cowshorses/valid/cows

To transfer the horses validation set
scp -i key.pem /Users/x/Documents/images/horses/valid/*  ubuntu@public-ip:~/data/cowshorses/valid/horses

Now that everything has been prepared, re-run the Jupyter Notebook, as described under Starting the Jupyter Notebook above (ensure you start from the top of the notebook).

When I trained the model on ALL the images (less those in the valid folders), I got an accuracy of 0.95! Wow, that is so amazing! I didn’t do anything other than increase the number of images in the training set.

Final thoughts

In a future blog post, I will show you how you can use the trained model to identify cows and horses from an unlabelled set of photos.

For now, I would highly recommend that you use the above-mentioned image downloader to scrape Google for some other datasets. Then use the above instructions to train the model on those images and see what kind of accuracy you can achieve (maybe try identifying chickens and ducks?).

As mentioned before, once finished, don’t forget to shut down your AWS EC2 instance. If you don’t need it anymore, you can terminate it, to save on storage costs as well.

If you are keen about ML, you can check out the courses at http://www.fast.ai (they are free)

If you want to dabble in the maths behind ML, as previously mentioned, Andrew Ng’s https://www.coursera.org/learn/machine-learning is one of the finest.

Lastly, if you are keen to take on some ML challenges, check out https://www.kaggle.com. They have lots and lots of competitions running all the time, some of which pay out actual money. There are lots of resources as well, and you can learn from others on the site.

Till the next time, Enjoy 😉