Introduction
Cloud computing empowers people by letting them provision compute resources quickly and cheaply. For a short-term project, you do not need to purchase a physical server that might sit unused once the project finishes. Instead, with cloud computing you can provision the resources you need quickly, pay only for the duration that you use them, and deprovision them once you have finished so that you are no longer charged.
AWS provides a generous free tier which enables one to learn about all that is available within AWS. Unfortunately, I have heard stories of people forgetting to turn off their Amazon EC2 instances, only to find at the end of the month that they have used up all of their free tier entitlements and have to pay the excess out of their own pocket (in such scenarios, I would recommend contacting AWS Support; they are quite understanding and can provide you with a credit to offset the out-of-pocket expenses).
In this blog, I will introduce you to a solution I developed some time back to provide visibility of all the running Amazon EC2 instances. Each day, at a predefined time, every AWS region is checked for running Amazon EC2 instances. If any are found, a Slack message is posted into a channel with details about these running Amazon EC2 instances. You can then act accordingly.
High Level Architecture
Below is the high-level architecture for the solution.
The steps are described below:
- Every day, at a predefined time, Amazon CloudWatch Events will trigger an AWS Lambda function.
- The AWS Lambda function will check all the AWS regions for any running Amazon EC2 instances.
- If any running Amazon EC2 instances are found, the AWS Lambda function will post a Slack message in a channel, with details about the running Amazon EC2 instances.
Let’s get started.
Create a free Slack Workspace
For those who are unaware, Slack provides a free plan for creating your own Slack Workspace. The entitlements of the free plan are more than sufficient for the solution described in this blog.
Follow the instructions below to create a free Slack Workspace for yourself (unless you already have one):
- Go to https://slack.com/intl/en-au/pricing and create a new Workspace.
- Next, in your Slack Workspace, create a Slack channel where the notifications from the AWS Lambda function will be published (I named my Slack channel aws-notifications).
- Next, we need to create a Slack App. This will enable us to publish notifications from our AWS Lambda function.
- Go to https://api.slack.com/apps?new_app=1 and click Create New App.
- In the next screen, click From scratch.
- In the next screen, give your App a name (I named my app monitoring-bot) and pick the Workspace you want to develop your app in (choose the Workspace that you created above). Click Create App.
- In the next screen, in the left-hand side menu, Basic Information should be selected. In the right-hand side screen, click Incoming Webhooks.
- In the next screen, use the slider beside Activate Incoming Webhooks to turn it on.
- Scroll down the page and click Add New Webhook to Workspace.
- In the next screen, select the channel that you created above, for the App to publish new messages into. Click Allow.
- You will be returned to the previous screen. Confirm that the left-hand side menu has Incoming Webhooks selected. The newly created webhook will be visible in the right-hand side screen under Webhook URL. Click the Copy button under Webhook URL and save it for later use.
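Before provisioning anything in AWS, you can sanity-check the webhook by posting a test message to it directly. The sketch below uses only Python's standard library; the webhook URL shown is a placeholder, so substitute the Webhook URL you copied above (the helper names here are my own, not part of the solution's code):

```python
import json
import urllib.request


def build_slack_payload(message):
    # incoming webhooks expect a JSON body with a 'text' field
    return json.dumps({'text': message}).encode('utf-8')


def post_to_slack(webhook_url, message):
    request = urllib.request.Request(
        webhook_url,
        data=build_slack_payload(message),
        headers={'Content-Type': 'application/json'},
    )
    # Slack replies with the plain-text body 'ok' on success
    with urllib.request.urlopen(request) as response:
        return response.read().decode('utf-8')


if __name__ == '__main__':
    # placeholder URL - replace with your own Webhook URL before running
    print(post_to_slack('https://hooks.slack.com/services/XXX/YYY/ZZZ',
                        'hello from my webhook'))
```

If the message appears in your channel, the webhook is good to go.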
A sneak peek into the code that does all the magic
I have used an AWS Serverless Application Model (SAM) template to provision the required AWS resources. The AWS Lambda function code is written in Python.
Below is the AWS SAM template.yaml file.
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: SAM template - send notifications for running ec2 instances

Parameters:
  SlackWebhookURL:
    Type: String
    Description: Slack webhook url for sending notifications

Resources:
  monitorEC2InstancesFunction:
    Type: AWS::Serverless::Function
    Properties:
      Description: This function will monitor all EC2 instances on a schedule and send a slack notification when any are found running.
      Handler: src/monitor-ec2-instances.lambda_handler
      Runtime: python3.7
      Timeout: 300
      Events:
        CloudWatchEventsSchedule:
          Type: Schedule
          Properties:
            Schedule: "cron(0 8 * * ? *)"
            Name: CheckForRunningEC2Instances
            Description: Check for running ec2 instances
            Enabled: True
      Policies:
        - AWSLambdaBasicExecutionRole
        - EC2DescribePolicy: {}
      Environment:
        Variables:
          SLACK_WEBHOOK_URL: !Ref SlackWebhookURL
```
You will notice that the Amazon CloudWatch Event is set to trigger the AWS Lambda function daily at 08:00 UTC (6pm AEST). Feel free to change this, if you prefer a different time (I would recommend setting the time to when you are awake, so that if need be, you can turn off the running Amazon EC2 instances).
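If, like me, you think in AEST (UTC+10) rather than UTC, a tiny helper makes it easy to work out the cron hour for your preferred local time. This is purely an illustrative sketch of mine (`cron_for_aest_hour` is not part of the solution), and it ignores daylight saving:

```python
def cron_for_aest_hour(aest_hour):
    """Return the daily AWS cron expression for the given hour in AEST (UTC+10).

    AWS schedule expressions are always evaluated in UTC, so subtract the
    +10 offset (wrapping around midnight) to get the UTC hour.
    """
    utc_hour = (aest_hour - 10) % 24
    return 'cron(0 {} * * ? *)'.format(utc_hour)
```

For example, `cron_for_aest_hour(18)` returns `cron(0 8 * * ? *)`, which corresponds to the 08:00 UTC (6pm AEST) trigger described above.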
Next, let's have a look at the Python code that does all the heavy lifting.
```python
import os
import json

import boto3
import requests


def send_slack_message(slack_webhook_url, slack_message):
    print('>send_slack_message:slack_message:' + slack_message)
    slack_payload = {
        'text': slack_message
    }
    print('>send_slack_message:posting message to slack channel')
    response = requests.post(slack_webhook_url, json.dumps(slack_payload))
    print('>send_slack_message:response after posting to slack:' + response.text)


def find_running_ec2instances():
    regions = ['us-east-1', 'us-east-2', 'us-west-1', 'us-west-2', 'ap-south-1',
               'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
               'ap-southeast-1', 'ap-southeast-2', 'ca-central-1', 'eu-central-1',
               'eu-west-1', 'eu-west-2', 'eu-west-3', 'eu-north-1', 'sa-east-1']
    notification_message = 'The following EC2 instance(s) are currently running and are costing you money. Turn them off if you have finished using them: \n'
    slack_webhook_url = os.environ['SLACK_WEBHOOK_URL']

    # find running instances in each of the regions
    total_running_ec2_instances = 0
    for region in regions:
        client = boto3.client('ec2', region_name=region)
        running_ec2_instances = client.describe_instances(
            Filters=[
                {
                    'Name': 'instance-state-name',
                    'Values': [
                        'running'
                    ]
                }
            ],
            MaxResults=1000,
        )

        num_running_ec2_instances = 0
        # a reservation can contain more than one instance, so walk through all of them
        for reservation in running_ec2_instances['Reservations']:
            for ec2_instance in reservation['Instances']:
                num_running_ec2_instances += 1
                ec2_info = 'InstanceType:' + ec2_instance['InstanceType'] + \
                           ' LaunchTime(UTC):' + str(ec2_instance['LaunchTime'])
                ec2_info += ' PrivateIpAddress:' + ec2_instance['PrivateIpAddress']
                if 'PublicIpAddress' in ec2_instance:
                    ec2_info += ' PublicIpAddress:' + ec2_instance['PublicIpAddress']
                else:
                    print('>find_running_ec2instances:this is a private instance - no public ip address found')

                # find the name of this instance, if it exists. The name is stored in a
                # tag with Key == Name. If no such tag exists, the name for this
                # instance will be reported as blank
                ec2_instance_name = ''
                for tag in ec2_instance.get('Tags', []):
                    if tag['Key'] == 'Name':
                        ec2_instance_name = tag['Value']

                ec2_info = 'Region:' + region + ' Name:' + ec2_instance_name + ' ' + ec2_info
                print('>find_running_ec2instances:running ec2 instance found:' + ec2_info)
                notification_message += ec2_info + '\n'

        total_running_ec2_instances += num_running_ec2_instances
        print('>find_running_ec2instances:Number of running ec2 instances[' + region + ']:' + str(num_running_ec2_instances))

    print('>find_running_ec2instances:Total number of running ec2 instances[all regions]:' + str(total_running_ec2_instances))
    print('>find_running_ec2instances:Slack notification message:' + notification_message)
    if total_running_ec2_instances > 0:
        send_slack_message(slack_webhook_url, notification_message)
    return total_running_ec2_instances


def lambda_handler(event, context):
    num_running_instances = find_running_ec2instances()
    return {
        'statusCode': 200,
        'body': json.dumps('Number of EC2 instances currently running in all regions:' + str(num_running_instances))
    }
```
The code is quite simple: it goes through all the defined AWS regions and queries each for running Amazon EC2 instances. If any are found, their details are added to a message, which then gets sent to the Slack channel that was created above (this is done using the Slack App Webhook URL that we created earlier).
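One caveat: `describe_instances` returns at most `MaxResults` reservations per call, along with a `NextToken` when more remain. A personal account is very unlikely to hit the 1000 limit, but for completeness, here is a sketch of how pagination could be handled. `collect_reservations` and its `describe` parameter are hypothetical names of mine; `describe` is any callable returning a dict shaped like the `describe_instances` response, so it can be a `boto3` client method wrapped with `functools.partial`, or a stub for testing:

```python
def collect_reservations(describe):
    """Gather reservations across all pages of a describe_instances-style API.

    `describe` is called with an optional NextToken keyword argument and must
    return a dict containing 'Reservations' and, when more pages remain,
    'NextToken'.
    """
    reservations = []
    kwargs = {}
    while True:
        page = describe(**kwargs)
        reservations.extend(page.get('Reservations', []))
        next_token = page.get('NextToken')
        if not next_token:
            return reservations
        # pass the token back to fetch the next page
        kwargs = {'NextToken': next_token}
```

With a real client, this could be used as `collect_reservations(functools.partial(client.describe_instances, Filters=filters, MaxResults=1000))`.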
The Slack message contains the following attributes of each of the running Amazon EC2 instances:
- The AWS Region that the instance is running in
- The name of the Amazon EC2 instance (it is assumed that the Amazon EC2 instance has a tag called Name which has the name as its value)
- The Instance type of the Amazon EC2 instance
- The time that the instance was launched (in UTC)
- The private IP address of the Amazon EC2 instance
- The public IP address (if it exists) for the Amazon EC2 instance
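Put together, each line of the Slack message mirrors these attributes. The helper below is a standalone sketch of how such a line could be assembled from one entry of a reservation's `Instances` list (`format_instance_line` is my own hypothetical name, not from the solution's code):

```python
def format_instance_line(region, instance):
    """Build the one-line summary for a running instance.

    `instance` is a dict in the shape returned by boto3's describe_instances
    (one entry of a reservation's 'Instances' list).
    """
    # the instance name lives in a tag with Key == 'Name'; default to blank
    name = ''
    for tag in instance.get('Tags', []):
        if tag['Key'] == 'Name':
            name = tag['Value']
    line = ('Region:' + region + ' Name:' + name +
            ' InstanceType:' + instance['InstanceType'] +
            ' LaunchTime(UTC):' + str(instance['LaunchTime']) +
            ' PrivateIpAddress:' + instance['PrivateIpAddress'])
    # the public IP address is only present for public instances
    if 'PublicIpAddress' in instance:
        line += ' PublicIpAddress:' + instance['PublicIpAddress']
    return line
```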
Below are the contents of the Makefile that I am using. This truly makes life easy as it creates shortcuts for running long sets of commands.
```makefile
#define variables
aws_profile = ${AWS_PROFILE_NAME}
aws_s3_bucket = ${AWS_S3_BUCKET_NAME}
aws_s3_bucket_prefix = monitor-ec2-instances
aws_stack_name = monitor-ec2-instances
aws_stack_iam_capabilities = CAPABILITY_IAM
sam_package_template_file = template.yaml
sam_package_output_template_file = package.yaml

all: usage
.PHONY: all

usage:
	@echo make package - package the sam application and copy it to the s3 bucket [s3://${aws_s3_bucket}/${aws_s3_bucket_prefix}/]
	@echo make deploy - deploy the packaged sam application to AWS
	@echo make update - package the sam application and then deploy it to AWS
	@echo make validate - validate template file [${sam_package_template_file}]
	@echo make clean - delete local ${sam_package_output_template_file} file
.PHONY: usage

package:
	sam package --template-file ${sam_package_template_file} --output-template-file ${sam_package_output_template_file} --s3-bucket ${aws_s3_bucket} --s3-prefix ${aws_s3_bucket_prefix} --profile ${aws_profile}
.PHONY: package

deploy:
	sam deploy --template-file ${sam_package_output_template_file} --stack-name ${aws_stack_name} --capabilities ${aws_stack_iam_capabilities} --profile ${aws_profile} --parameter-overrides ParameterKey=SlackWebhookURL,ParameterValue=${SLACK_WEBHOOK_URL}
.PHONY: deploy

update:
	make clean
	make package
	make deploy
.PHONY: update

validate:
	sam validate --template-file ${sam_package_template_file}
.PHONY: validate

clean:
	rm -f ./${sam_package_output_template_file}
.PHONY: clean
```
Provisioning the AWS Resources
To provision the above solution, follow the steps below:
- Clone the repository from https://github.com/nivleshc/blog-aws-notify-on-running-instances.git
- Export the values for the following environment variables:

```shell
export AWS_PROFILE_NAME={aws profile to use}
export AWS_S3_BUCKET_NAME={name of aws s3 bucket to store SAM artefacts}
export SLACK_WEBHOOK_URL={slack webhook url to use for sending slack notifications - this is what we had created above}
```
- Once done, run the following commands to deploy the solution to your AWS environment:

```shell
make package
make deploy
```
- Once completed, you should be able to see the resources deployed into your AWS Account. To test, you can manually trigger the AWS Lambda function. If it finds any running Amazon EC2 instances, a Slack message will be posted to the channel that the Slack Webhook is configured for.
- To deploy any changes after you have provisioned the solution, run the following command:

```shell
make update
```

Note: if you have made any changes to template.yaml, then run the command below before running make update (and fix any issues it identifies):

```shell
make validate
```
Below is a screenshot of the message I received, notifying me of two Amazon EC2 instances that were running.
I have been using this solution for a while now and it has proven invaluable, especially when I have deployed Amazon EC2 instances in regions other than my primary region and forgotten that they were still running!
The solution currently checks for running Amazon EC2 instances in the following AWS regions (the list can be easily updated within the Python code as new AWS regions are announced):
- us-east-1
- us-east-2
- us-west-1
- us-west-2
- ap-south-1
- ap-northeast-1
- ap-northeast-2
- ap-northeast-3
- ap-southeast-1
- ap-southeast-2
- ca-central-1
- eu-central-1
- eu-west-1
- eu-west-2
- eu-west-3
- eu-north-1
- sa-east-1
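Rather than maintaining this list by hand, the regions could also be discovered at runtime using EC2's DescribeRegions API. Below is a sketch; `list_region_names` is a hypothetical helper of mine, and the client is passed in as a parameter so it can be stubbed out for testing:

```python
def list_region_names(ec2_client):
    """Return the region names visible to this account.

    Expects the describe_regions response shape:
    {'Regions': [{'RegionName': ...}, ...]}.
    Pass in boto3.client('ec2') when running for real.
    """
    response = ec2_client.describe_regions()
    return sorted(region['RegionName'] for region in response['Regions'])
```

In `find_running_ec2instances`, the hardcoded `regions` list could then be replaced with `list_region_names(boto3.client('ec2'))`. Note that this returns only the regions enabled for your account, which is usually exactly what you want here.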
I hope this solution proves invaluable in keeping your AWS costs down by giving you visibility of any running Amazon EC2 instances.
Till the next time, stay safe.