Ok Google Email me the status of all vms – Part 2

In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home "OK Google Email me the status of all vms" and having it email us the results. If you haven't finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform If This Then That, or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of triggers and actions is called a recipe.

Ok, let's dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Login to IFTTT and click on My Applets menu from the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue + this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   In the results Google Assistant will be displayed. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other fields are optional; fill them in if you wish.

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on + that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next, the Complete action fields screen is shown.

For

  • URL – paste the webhook URL of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action

1.14   In the next screen, click Finish

IFTTT_Review

 

Woo hoo. Everything is now complete. Let's do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.

I have noticed some delays in receiving the email, however the most I have had to wait is about 5 minutes. If this is unacceptable, modify the Send-MailMessage command in the runbook script by adding the parameter -Priority High. This sends all emails with high priority, which should make things faster. Also, the runbook is currently running in Azure; better performance might be achieved by using Hybrid Runbook Workers.
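For example, the runbook's existing Send-MailMessage line from Part 1 could be changed as follows. This is a minimal tweak, not part of the original script:

# Send the email with high priority so the mail server processes it ahead of normal mail
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl -Priority High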

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.


Ok Google Email me the status of all vms – Part 1

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the upcoming areas in Computing Science is Artificial Intelligence. What seemed like science fiction in the days of Isaac Asimov, when he penned I, Robot, seems closer to reality now.

Lately, virtual assistants from the likes of Apple, Amazon and Google have been popping up in the market. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists or letting us know when our next train will arrive. I still remember my first virtual assistant, Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions, it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well maybe in the next version (Google here’s a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines, however I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.

In this first blog post, we will setup the backend systems to achieve the tasks and in the next blog post, we will connect it to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is the sequence of tasks that will take place:

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let's get cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides the permissions to access Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to login to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account

Now, let's create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure tenant. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is actually the ApplicationId suffixed by your Azure AD domain name. To get the ApplicationId, run the following, providing the IdentifierUri that was supplied when the application was created above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let's check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where TenantId is your Azure Tenant’s ID

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Login to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately, Azure doesn’t provide relay servers that scripts can use to send email. Instead, you have to use either EOP (Exchange Online Protection) servers or SendGrid to achieve this. SendGrid is an email delivery service available through Azure, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month, and that is plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resources, click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP+ "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzurermVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ‘ around the name)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address the same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can’t use it to trigger the runbook, however before it expires, you can change the expiry date. For security reasons, it is recommended that you don’t keep the webhook alive for a long period of time.
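If you want to verify the webhook independently before wiring it up to IFTTT in Part 2, you can call it directly from PowerShell. The snippet below is a quick sanity check; {webhookUrl} is a placeholder for the URL you copied above:

# Trigger the runbook by POSTing to the webhook URL (replace {webhookUrl} with your own)
$webhookUrl = "{webhookUrl}"
$response = Invoke-WebRequest -Uri $webhookUrl -Method Post -UseBasicParsing
# Azure Automation should return a 202 status code and the id of the queued runbook job
$response.StatusCode
$response.Content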

Webhook_details

That's it, folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉

Automate Secondary ADFS Node Installation and Configuration

Introduction

Additional nodes in an ADFS farm are required to provide redundancy in case your primary ADFS node goes offline. This ensures your ADFS service is still up and servicing all incoming requests. Additional nodes also help in load balancing the incoming traffic, which provides a better user experience in cases of high authentication traffic.

Overview

Once an ADFS farm has been created, adding additional nodes is quite simple and mostly relies on the same concepts for creating the ADFS farm. I would suggest reading my previous blog Automate ADFS Farm Installation and Configuration as some of the steps we will use in this blog were documented in it.

In this blog, I will show how to automatically provision a secondary ADFS node to an existing ADFS farm. The learnings in this blog can be easily used to deploy more ADFS nodes automatically, if needed.

Install ADFS Role

After provisioning a new Azure virtual machine, we need to install the Active Directory Federation Services role on it. To do this, we will use the same Desired State Configuration (DSC) script that was used in Automate ADFS Farm Installation and Configuration. Please refer to the section Install ADFS Role in that blog for the steps to create the DSC script file InstallADFS.ps1.

Add to an existing ADFS Farm

Once the ADFS role has been installed on the virtual machine, we will create a Custom Script Extension (CSE) to add it to the ADFS farm.

In order to do this, we need the following

  • certificate that was used to create the ADFS farm
  • ADFS service account username and password that was used to create the ADFS farm

Once the above prerequisites have been met, we need a method for making the files available to the CSE. I documented a neat trick to “sneak in” the certificate and password files onto the virtual machine by using Desired State Configuration (DSC) package files in my previous blog. Please refer to Automate ADFS Farm Installation and Configuration, under the section Create ADFS Farm, for the steps.

Also note that for adding the node to the ADFS farm, the domain user credentials are not required. The certificate file will be named adfs_certificate.pfx and the file containing the encrypted ADFS service account password will be named adfspass.key.

Assuming that the prerequisites have been satisfied, and the files have been “sneaked” onto the virtual machine, let's proceed with creating the CSE.

Open Windows Powershell ISE and paste the following.

param (
  $DomainName,
  $PrimaryADFSServer,
  $AdfsSvcUsername
)

The above shows the parameters that need to be passed to the CSE where

$DomainName is the name of the Active Directory domain
$PrimaryADFSServer is the hostname of the primary ADFS server
$AdfsSvcUsername is the username of the ADFS service account

Save the file with a name of your choice (do not close the file as we will be adding more lines to it). I named my script AddToADFSFarm.ps1

Next, we need to define a variable that will contain the path to the directory where the certificate file and the file containing the encrypted adfs service account password are stored. Also, we need a variable to contain the key that was used to encrypt the adfs service account password. This will be required to decrypt the password.

Add the following to the CSE file

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Next, we need to decrypt the encrypted ADFS service account password.
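The embedded code is not reproduced in this post; a minimal sketch of this step, assuming the $localpath and $Key values defined above and the adfspass.key file name mentioned earlier, could look like this:

# Read the encrypted ADFS service account password and convert it back into a SecureString using $Key
$AdfsSvcPassword = Get-Content ($localpath + "adfspass.key") | ConvertTo-SecureString -Key $Key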

Now, we need to import the certificate into the local computer certificate store. To make things simple, when the certificate was exported from the primary ADFS server, it was encrypted using the ADFS service account password.

After importing the certificate, we will read it to get its thumbprint.
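Again, the original snippet is embedded in the source post; a sketch of the import, assuming the adfs_certificate.pfx file name from earlier and the $AdfsSvcPassword decrypted above, might be:

# Import the ADFS certificate into the local machine store and capture its thumbprint.
# The pfx password is the ADFS service account password, as described above.
$cert = (Import-PfxCertificate -FilePath ($localpath + "adfs_certificate.pfx") -CertStoreLocation Cert:\LocalMachine\My -Password $AdfsSvcPassword).Thumbprint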

Up until now, the steps are very similar to creating an ADFS farm. However, below is where the steps diverge.

Add the following lines to add the virtual machine to the existing ADFS farm
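The exact lines are in the embedded snippet; a hedged sketch using the Add-AdfsFarmNode cmdlet, with the parameters and variables introduced above, would be along these lines:

# Build the ADFS service account credential in domain\username format
$AdfsSvcCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $AdfsSvcPassword)

# Join this server to the existing farm, pointing it at the primary ADFS server
Add-AdfsFarmNode -CertificateThumbprint $cert -ServiceAccountCredential $AdfsSvcCreds -PrimaryComputerName $PrimaryADFSServer -OverwriteConfiguration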

You now have a custom script extension file that will add a virtual machine as a secondary node to an existing ADFS Farm.

Below is the full CSE

All that is missing now is the method to bootstrap the scripts described above using Azure Resource Manager templates.

Below is the ARM template that can be used to install the ADFS role on a virtual machine and then add this virtual machine as a secondary node to the ADFS farm

In the above ARM template, the parameter ADFS02VMName refers to the hostname of the virtual machine that will be added to the ADFS Farm.

Listed below are the variables that have been used in the ARM template above

The above method can be used to add as many nodes to the ADFS farm as needed.

I hope this comes in handy when creating an ARM template to automatically deploy an ADFS Farm with additional nodes.

Automate ADFS Farm Installation and Configuration

Introduction

In this multi-part blog, I will be showing how to automatically install and configure a new ADFS Farm. We will accomplish this using Azure Resource Manager templates, Desired State Configuration scripts and Custom Script Extensions.

Overview

We will use Azure Resource Manager to create a virtual machine that will become our first ADFS Server. We will then use a desired state configuration script to join the virtual machine to our Active Directory domain and then to install the ADFS role. Finally, we will use a Custom Script Extension to install our first ADFS Farm.

Install ADFS Role

We will be using the xActiveDirectory and xPendingReboot experimental DSC modules.

Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading, unzip the files and place the contents in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed the default Program Files location, this will be C:\Program Files\WindowsPowerShell\Modules)

Open your Windows PowerShell ISE and let's create a DSC script that will join our virtual machine to the domain and also install the ADFS role.

Copy the following into a new Windows Powershell ISE file and save it as a filename of your choice (I saved mine as InstallADFS.ps1)

In the above, we are declaring some mandatory parameters and some variables that will be used within the script

$MachineName is the hostname of the virtual machine that will become the first ADFS server

$DomainName is the name of the domain where the virtual machine will be joined

$AdminCreds contains the username and password for an account that has permissions to join the virtual machine to the domain

$RetryCount and $RetryIntervalSec hold values that will be used to check whether the domain is available
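The original code block is embedded in the source post; a minimal sketch of the configuration header being described, assuming the parameter names above (the retry defaults of 20 and 30 are illustrative, not the author's values), is shown below. The Import-DscResource statement and the Node block described in the following sections sit inside this configuration:

Configuration InstallADFS
{
    param (
        [Parameter(Mandatory)]
        [string]$MachineName,

        [Parameter(Mandatory)]
        [string]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$AdminCreds,

        [int]$RetryCount = 20,
        [int]$RetryIntervalSec = 30
    )

    # Import-DscResource and the Node localhost block described below go here
}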

We need to import the experimental DSC modules that we had downloaded. To do this, add the following lines to the DSC script

Import-DscResource -Module xActiveDirectory, xPendingReboot

Next, we need to convert the supplied $AdminCreds into a domain\username format. This is accomplished by the following lines (the converted value is held in $DomainCreds )
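The embedded lines aren't shown here; a common way to express this (a sketch, not necessarily the author's exact code) is:

# Convert the supplied credential into domain\username format; the result is held in $DomainCreds
[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)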

Next, we need to tell DSC that the command needs to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

We need to tell the LocalConfigurationManager that it should continue with the configuration after a reboot, reboot the server if needed, and apply the settings only once. (DSC can apply a setting and constantly monitor it to check that it has not been changed; if the setting has drifted, DSC can re-apply it. In our case we will not do this, we will apply the settings just once.)
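Inside the Node localhost block, this typically looks like the following sketch, which reflects the behaviour described above (continue after reboot, reboot if needed, apply only once):

LocalConfigurationManager
{
    ConfigurationMode  = 'ApplyOnly'
    RebootNodeIfNeeded = $true
    ActionAfterReboot  = 'ContinueConfiguration'
}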

Next, we need to check if the Active Directory domain is ready. For this, we will use the xWaitForADDomain function from the xActiveDirectory experimental DSC module.

Once we know that the Active Directory domain is available, we can go ahead and join the virtual machine to the domain.

The JoinDomain resource depends on xWaitForADDomain finishing successfully. If xWaitForADDomain fails, JoinDomain will not run.

Once the virtual machine has been added to the domain, it needs to be restarted. We will use the xPendingReboot resource from the xPendingReboot experimental DSC module to accomplish this.

Next, we will install the ADFS role on the virtual machine
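The embedded snippets for these resources aren't reproduced in this post. Below is a hedged sketch of the chain described above; the xWaitForADDomain, xPendingReboot and WindowsFeature resources match what the post names, while the domain-join step is illustrated here with the in-box Script resource (the original may use a different resource for this):

xWaitForADDomain WaitForDomain
{
    DomainName           = $DomainName
    DomainUserCredential = $DomainCreds
    RetryCount           = $RetryCount
    RetryIntervalSec     = $RetryIntervalSec
}

# Illustrative domain join using the in-box Script resource
Script JoinDomain
{
    GetScript  = { @{ Result = (Get-WmiObject Win32_ComputerSystem).Domain } }
    TestScript = { (Get-WmiObject Win32_ComputerSystem).Domain -eq $using:DomainName }
    SetScript  = { Add-Computer -DomainName $using:DomainName -Credential $using:DomainCreds -Force }
    DependsOn  = "[xWaitForADDomain]WaitForDomain"
}

xPendingReboot RebootAfterDomainJoin
{
    Name      = "RebootAfterDomainJoin"
    DependsOn = "[Script]JoinDomain"
}

WindowsFeature InstallADFSRole
{
    Ensure    = "Present"
    Name      = "ADFS-Federation"
    DependsOn = "[xPendingReboot]RebootAfterDomainJoin"
}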

Our script has now successfully added the virtual machine to the domain and installed the ADFS role on it. You now have to create a zip file with InstallADFS.ps1 and upload it to a location that Azure Resource Manager can access (I would recommend uploading to GitHub). Include the xActiveDirectory and xPendingReboot experimental DSC modules. Also add a folder called Certificates inside the zip file and put the ADFS certificate and the encrypted password files (discussed in the next section) inside the folder.

In the next section, we will configure the ADFS Farm.

The full InstallADFS.ps1 DSC script is pasted below

Create ADFS Farm

Once the ADFS role has been installed, we will use Custom Script Extensions (CSE) to create the ADFS farm.

One of the requirements to configure ADFS is a signed certificate. I used a 90 day trial certificate from Comodo.

There is a trick that I am using to make my certificate available on the virtual machine. If you bootstrap a DSC script to your virtual machine in an Azure Resource Manager template, the script along with all the non out-of-box DSC modules have to be packaged into a zip file and uploaded to a location that ARM can access. Then, before ARM uses the script, it downloads the zip file, unzips it, and copies the directories inside the zip file to $env:ProgramFiles\WindowsPowerShell\Modules (C:\Program Files\WindowsPowerShell\Modules)

I am using this feature to sneak my certificate on to the virtual machine. I create a folder called Certificates inside the zip file containing the DSC script and put the certificate inside it. Also, I am not too fond of passing plain passwords from my ARM template to the CSE, so I created two files, one to hold the encrypted password for the domain administrator account and the other to contain the encrypted password of the adfs service account. These two files are named adminpass.key and adfspass.key and will be placed in the Certificates folder within the zip file.

I used the following to generate the encrypted password files

$AdminSecurePassword = ConvertTo-SecureString {AdminPlainTextPassword} -AsPlainText -Force
ConvertFrom-SecureString $AdminSecurePassword -Key $key > adminpass.key

$ADFSSecurePassword = ConvertTo-SecureString {ADFSPlainTextPassword} -AsPlainText -Force
ConvertFrom-SecureString $ADFSSecurePassword -Key $key > adfspass.key

The $key is used to convert the secure string into an encrypted standard string. Valid key lengths are 16, 24, or 32 bytes.

For this blog, we will use

$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Open Windows Powershell ISE and paste the following (save the file with a name of your choice. I saved mine as ConfigureADFS.ps1)

param (
 $DomainName,
 $DomainAdminUsername,
 $AdfsSvcUsername
)

These are the parameters that will be passed to the CSE

$DomainName is the name of the Active Directory domain
$DomainAdminUsername is the username of the domain administrator account
$AdfsSvcUsername is the username of the ADFS service account

Next, we will define the value of the Key that was used to encrypt the password and the location where the certificate and the encrypted password files will be placed

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Now, we have to read the encrypted passwords from the adminpass.key and adfspass.key files and then convert them into a domain\username format
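The embedded code isn't shown here; a minimal sketch of this step, assuming the $localpath and $Key values above and the parameter names passed to the CSE, could be:

# Read the encrypted passwords and convert them back into SecureStrings using $Key
$AdminPassword   = Get-Content ($localpath + "adminpass.key") | ConvertTo-SecureString -Key $Key
$AdfsSvcPassword = Get-Content ($localpath + "adfspass.key") | ConvertTo-SecureString -Key $Key

# Build credentials in domain\username format
$DomainAdminCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$DomainAdminUsername", $AdminPassword)
$AdfsSvcCreds     = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $AdfsSvcPassword)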

Next, we will import the certificate into the local computer certificate store. We will mark the certificate as exportable and set the password to be the same as the domain administrator password.

In the above, after the certificate is imported, $cert is used to hold the certificate thumbprint
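Since the embedded snippet isn't reproduced here, a sketch of the import, assuming the adfs_certificate.pfx file name and the $AdminPassword decrypted above, might be:

# Import the ADFS certificate (pfx password = domain administrator password), mark it exportable,
# and keep its thumbprint in $cert
$cert = (Import-PfxCertificate -FilePath ($localpath + "adfs_certificate.pfx") -CertStoreLocation Cert:\LocalMachine\My -Password $AdminPassword -Exportable).Thumbprint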

Next, we will configure the ADFS Farm

The ADFS Federation Service display name is set to Active Directory Federation Service and the Federation Service Name is set to fs.adfsfarm.com
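The embedded snippet isn't shown here; a hedged sketch using the Install-AdfsFarm cmdlet, with the values stated above and the credentials built earlier, would be:

# Create the first node of the ADFS farm
Install-AdfsFarm -CertificateThumbprint $cert -FederationServiceDisplayName "Active Directory Federation Service" -FederationServiceName "fs.adfsfarm.com" -ServiceAccountCredential $AdfsSvcCreds -Credential $DomainAdminCreds -OverwriteConfiguration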

Upload the CSE to a location that Azure Resource Manager can access (I uploaded my script to GitHub)

The full ConfigureADFS.ps1 CSE is shown below

Azure Resource Manager Template Bootstrapping

Now that the DSC and CSE scripts have been created, we need to add them in our ARM template, straight after the virtual machine is provisioned.

To add the DSC script, create a DSC extension and link it to the DSC Package that was created to install ADFS. Below is an example of what can be used

The extension will run after the ADFS virtual machine has been successfully created (referred to as ADFS01VMName)

The MachineName, DomainName and domain administrator credentials are passed to the DSC extension.

Below are the variables that have been used in the json file for the DSC extension (I have listed my GitHub repository location)

Next, we have to create a Custom Script Extension to link to the CSE for configuring ADFS. Below is an example that can be used

The CSE depends on the ADFS virtual machine being successfully provisioned and on the DSC extension that installs the ADFS role having completed successfully.

The DomainName, Domain Administrator Username and the ADFS Service Username are passed to the CSE script

The following contains a list of the variables being used by the CSE (the example below shows my GitHub repository location)

"repoLocation": "https://raw.githubusercontent.com/nivleshc/arm/master/",
"ConfigureADFSScriptUrl": "[concat(parameters('repoLocation'),'ConfigureADFS.ps1')]",

That’s it Folks! You now have an ARM Template that can be used to automatically install the ADFS role and then configure a new ADFS Farm.

In the next blog, we will explore how to add another node to the ADFS Farm and we will also look at how we can automatically create a Web Application Proxy server for our ADFS Farm.